Keywords: error-correcting codes, low-degree polynomials, randomness, linear algebra, questions begging to be resolved
A Puzzle
Your friend picks a polynomial f(x) ∈ F_2[x_1, ···, x_m] of degree r ≈ 3√m.
She gives you the entire truth-table of f, i.e. the value of f(v) for every v ∈ {0,1}^m:
1 0 1 1 0 0 0 1 0 1 1 0 0 0 1 1
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
Can you recover f?
Of course! Just interpolate.
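The interpolation step can be made concrete. The sketch below is our own illustration (not code from the talk, helper names are ours): over F_2 every function on {0,1}^m equals a unique multilinear polynomial, and the coefficient of the monomial ∏_{i∈S} x_i is the XOR of f(v) over all v whose support is contained in S (the Möbius/ANF transform).

```python
# Sketch (illustration only): recover a polynomial over F_2 from its
# truth table via the Mobius/ANF transform.
from itertools import product

def truth_table(f, m):
    """Evaluate f on all of {0,1}^m in lexicographic order."""
    return [f(v) for v in product((0, 1), repeat=m)]

def interpolate_anf(table, m):
    """Return the monomials (as tuples of variable indices) of the
    unique multilinear polynomial with this truth table."""
    points = list(product((0, 1), repeat=m))
    monomials = set()
    for s in points:  # s encodes a candidate monomial's support
        coeff = 0
        for v, fv in zip(points, table):
            if all(vi <= si for vi, si in zip(v, s)):  # v subseteq s
                coeff ^= fv
        if coeff:
            monomials.add(tuple(i for i, si in enumerate(s) if si))
    return monomials

f = lambda v: (v[0] & v[1]) ^ (v[2] & v[3])  # x1*x2 + x3*x4 over F_2
print(interpolate_anf(truth_table(f, 4), 4))  # {(0, 1), (2, 3)}
```

Running the transform on the truth table of x1·x2 + x3·x4 recovers exactly those two monomials.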
A Puzzle
Same setup, but now she corrupts 49% of the bits:
0 1 0 0 1 1 1 1 0 1 1 0 0 0 1 1
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
Can you recover f?
Impossible if errors are adversarial...
A Puzzle
And if she corrupts 49% of the bits randomly?
This talk: Yes, and we can do it efficiently!
(“Efficiently decoding Reed-Muller codes from random errors”)
Efficiently decoding
Reed-Muller codes
from random errors
Ramprasad Saptharishi Amir Shpilka Ben Lee Volk
TIFR Tel Aviv University Tel Aviv University
Typical Setting
Alicia Machado sends: “Don't vote for Trump”
Channel corrupts the message...
Bobby Jindal receives: “Dude vote for Trump”
Want to encode messages with redundancy to make them resilient to errors.
Reed-Muller Codes: RM(m, r)
Message: A polynomial f ∈ F_2[x_1, ···, x_m] of degree at most r.
Encoding: The evaluation of f on all points in {0,1}^m.
f(x_1, x_2, x_3, x_4) = x_1x_2 + x_3x_4 −→
0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0
0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
A linear code, with
▶ Block Length: 2^m =: n
▶ Distance: 2^{m−r} (lightest codeword: x_1x_2 ··· x_r)
▶ Dimension: (m choose 0) + (m choose 1) + ··· + (m choose r) =: (m choose ≤r)
▶ Rate: dimension/block length = (m choose ≤r)/2^m
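The encoding map can be sketched in a few lines (our own illustration, not the talk's code); it reproduces the codeword row shown above for f = x_1x_2 + x_3x_4.

```python
# Sketch: RM encoding = evaluate the message polynomial on all of {0,1}^m.
from itertools import product

def rm_encode(f, m):
    """Evaluate f on all 2^m points, reduced mod 2, in lexicographic order."""
    return [f(v) % 2 for v in product((0, 1), repeat=m)]

word = rm_encode(lambda v: v[0] * v[1] + v[2] * v[3], 4)
print(word)  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0]
```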
Reed-Muller Codes: RM(m, r)
Generator Matrix: E(m, r), the evaluation matrix of deg ≤ r monomials — rows indexed by monomials M with deg(M) ≤ r, columns indexed by points v ∈ F_2^m, with entry M(v).
(Every codeword is spanned by the rows.)
Also called the inclusion matrix (M(v) = 1 if and only if “M ⊆ v”).
Decoding RM(m, r)
Worst Case Errors: Up to d/2, where d = 2^{m−r} is the minimum distance (algorithm by [Reed54]).
List Decoding: max radius with a constant number of words.
List decoding radius = d [Gopalan-Klivans-Zuckerman08, Bhowmick-Lovett15].
Two schools of study
▶ Hamming Model a.k.a. worst-case errors
▶ Generally the model of interest for complexity theorists,
▶ Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs).
▶ Shannon Model a.k.a. random errors
▶ The standard model for coding theorists,
▶ Recent breakthroughs (e.g. Arıkan’s polar codes),
▶ An ongoing research endeavor:
How do Reed-Muller codes perform in the Shannon model?
Models for random corruptions (channels)
Binary Erasure Channel — BEC(p)
Each bit independently replaced by ‘?’ with probability p
0 0 1 1 0 0 0 1 −→ 0 0 1 1 0 ? ? ?
Binary Symmetric Channel — BSC(p)
Each bit independently flipped with probability p
0 0 1 1 0 0 0 1 −→ 0 0 1 1 0 1 1 0
(almost) equiv: fixed number t ≈ pn of random errors
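A toy simulation of the two channels (our own illustration; function names are ours):

```python
# Sketch: the two channel models. Bits pass through independently;
# p is the erasure (BEC) or flip (BSC) probability.
import random

def bec(bits, p, rng=random):
    """Binary erasure channel: each bit becomes '?' with probability p."""
    return ['?' if rng.random() < p else b for b in bits]

def bsc(bits, p, rng=random):
    """Binary symmetric channel: each bit is flipped with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

sent = [0, 0, 1, 1, 0, 0, 0, 1]
print(bec(sent, 0.3))  # e.g. [0, '?', 1, 1, 0, 0, '?', 1]
print(bsc(sent, 0.3))  # e.g. [0, 1, 1, 1, 0, 0, 0, 0]
```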
Channel Capacity
Question: Given a channel, what is the best rate we can hope for?
If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n?
▶ BEC(p): intuitively, (1 − p)n.
▶ BSC(p): intuitively, (1 − H(p))n (as (n choose pn) ≈ 2^{H(p)·n}).
[Shannon48] The maximum rate that enables decoding (w.h.p.) is:
1 − p for BEC(p),
1 − H(p) for BSC(p).
Codes achieving this bound are called capacity achieving.
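The two capacity formulas are easy to evaluate numerically (a sketch with our own helper names). At the puzzle's 49% flip rate, the BSC capacity is tiny but still positive, which is why recovery is possible at all:

```python
# Sketch: Shannon capacities of BEC(p) and BSC(p), as stated above.
import math

def h2(p):
    """Binary entropy H(p), with the convention H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def capacity_bec(p):
    return 1 - p

def capacity_bsc(p):
    return 1 - h2(p)

print(capacity_bec(0.49))            # 0.51
print(round(capacity_bsc(0.49), 4))  # 0.0003
```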
Motivating questions for this talk
How well do Reed-Muller codes perform in the Shannon Model?
In BEC(p)? In BSC(p)?
Are they as good as polar codes?
Outline
▶ RM codes and erasures
▶ Decoding errors, efficiently
▶ Conclusions and Q&A
Dual of a code
Any linear space C can be specified by a generating basis, or as the solution set of a system of constraints.
C⊥ = {u : 〈v, u〉 = 0 for every v ∈ C}
Parity Check Matrix (a basis for C⊥ stacked as rows):
PCM · v = 0 ⇐⇒ v ∈ C
Linear codes and erasures
0 0 1 1 0 ? ? ?
Question: When can we decode from a pattern of erasures?
If two codewords agree on all non-erased positions, their difference is a non-zero codeword supported entirely on the erasures:
0 0 0 0 0 1 1 0
Decodable if and only if no non-zero codeword is supported on the erasures.
This depends only on the erasure pattern.
Observation: A pattern of erasures is decodable if and only if the corresponding columns of the Parity Check Matrix are linearly independent.
In order for a code to be good for BEC(p), the Parity Check Matrix of the code must be “robustly high-rank”.
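The observation can be made executable. Below is a minimal sketch (our own code; we use the [7,4] Hamming parity-check matrix as a stand-in example, since any concrete linear code illustrates the point):

```python
# Sketch: an erasure pattern is decodable iff the erased columns of the
# parity-check matrix H are linearly independent over F_2.
def rank_gf2(rows):
    """Rank of a 0/1 matrix over F_2 via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def erasures_decodable(H, erased):
    """Decodable iff the erased columns of H are linearly independent."""
    cols = [[row[j] for j in erased] for row in H]  # submatrix of H
    return rank_gf2(cols) == len(erased)

# Parity-check matrix of the [7,4] Hamming code (stand-in example).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(erasures_decodable(H, [0, 1, 2]))  # False: col2 = col0 + col1
print(erasures_decodable(H, [0, 1, 3]))  # True: columns independent
```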
Reed-Muller codes under erasures
Cool Fact
The dual of RM(m, r) is RM(m, r′) where r′ = m − r − 1.
Hence, the Parity Check Matrix of RM(m, r) is the generator matrix of RM(m, r′): rows indexed by monomials M (subsets of [m] of size ≤ r′), columns indexed by v ∈ {0,1}^m, with entries M(v).
Reed-Muller codes under erasures
Question: Let R = (m choose ≤r′). Suppose you pick (0.99)R columns at random. Are they linearly independent with high probability?
Known (“green”) ranges of r′, on the line from 0 to m/2:
▶ r′ = o(√m / log m): yes [ASW-15]
▶ r′ = O(√m): yes [KMSU+KP-16]
▶ r′ within o(m) of m/2: yes [ASW-15]
▶ the ranges in between: OPEN
From erasures to errors
Theorem: [ASW] Any pattern correctable from erasures in RM(m, m − r − 1) is correctable from errors in RM(m, m − 2r − 2).
Remark: If r falls in one of the known (“green”) ranges, then RM(m, m − r − 1) can correct ≈ (m choose ≤r) random erasures.
Corollary #1: (high-rate) Decodable from (1 − o(1))(m choose ≤r) random errors in RM(m, m − 2r) if r = o(√m / log m). (min distance of RM(m, m − 2r) is 2^{2r})
Corollary #2: (low-rate) Decodable from (1/2 − o(1))2^m random errors in RM(m, o(√m)). (If r = m/2 − o(√m), then m − 2r − 2 = o(√m); min distance of RM(m, √m) is 2^{m−√m})
[S-Shpilka-Volk]: Efficient decoding from errors.
What we want to prove
Theorem [S-Shpilka-Volk]
There exists an efficient algorithm with the following guarantee:
Given a corrupted codeword w = v + err_S of RM(m, m − 2r − 2), if S happens to be a correctable erasure pattern in RM(m, m − r − 1), then the algorithm correctly decodes v from w.
What we have access to
Received word is w := v + err_S for some v ∈ RM(m, m − 2r − 2) and S = {u_1, ..., u_t}.
The Parity Check Matrix for RM(m, m − 2r − 2) is the generator matrix for RM(m, 2r + 1), so E(m, 2r + 1) · v = 0, and hence the coordinate of E(m, 2r + 1) · w indexed by the monomial M equals M(u_1) + ··· + M(u_t).
Have access to Σ_{i∈S} M(u_i) for every monomial M with deg(M) ≤ 2r + 1 — equivalently, Σ_{i∈S} f(u_i) for every polynomial f with deg(f) ≤ 2r + 1.
Erasure Correctable Patterns
A pattern of erasures is correctable if and only if the corresponding columns in the parity check matrix are linearly independent.
The parity check matrix for RM(m, m − r − 1) is the generator matrix for RM(m, r).
Corollary
A pattern S = {u_1, ..., u_t} is erasure-correctable in RM(m, m − r − 1) if and only if u_1^r, ..., u_t^r are linearly independent.
(u_i^r is just the vector of all monomials of degree ≤ r evaluated at u_i.)
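The vectors u_i^r and the independence test can be sketched as follows (our own illustration; the monomial ordering is an arbitrary choice, and we include the constant monomial to match the columns of the generator matrix):

```python
# Sketch: u^r = evaluations of all monomials of degree <= r at u, and a
# linear-independence check over F_2 using an integer xor-basis.
from itertools import combinations

def u_to_the_r(u, r):
    """Evaluations of all monomials of degree <= r at u in {0,1}^m."""
    m = len(u)
    vec = []
    for d in range(r + 1):
        for S in combinations(range(m), d):
            val = 1
            for i in S:
                val &= u[i]
            vec.append(val)
    return vec

def independent_gf2(vectors):
    """True iff the given 0/1 vectors are linearly independent over F_2."""
    basis = {}  # leading-bit position -> reduced basis vector (as int)
    for vec in vectors:
        x = int(''.join(map(str, vec)), 2)
        while x:
            top = x.bit_length() - 1
            if top not in basis:
                basis[top] = x
                break
            x ^= basis[top]
        if x == 0:
            return False
    return True

pts = [(0, 0, 1), (0, 1, 0), (1, 1, 1)]
print(independent_gf2([u_to_the_r(u, 1) for u in pts]))  # True
```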
The Decoding Algorithm
Input: Received word w (= v + err_S)
Lemma
Assume that S is a pattern of erasures correctable in RM(m, m − r − 1). For any u ∈ {0,1}^m, we have u ∈ S if and only if there exists a polynomial g with deg(g) ≤ r such that
Σ_{i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1.
Can be checked by solving a system of linear equations. Algorithm is straightforward.
Proof of Lemma
Claim 1
Let u_1, ..., u_t ∈ {0,1}^m be such that u_1^r, ..., u_t^r are linearly independent. Then, for each i ∈ [t], there is a polynomial h_i such that:
▶ deg(h_i) ≤ r,
▶ h_i(u_j) = 1 if i = j, and 0 otherwise.
Proof.
Write u_1^r, ..., u_t^r as the columns of a matrix. Since the columns are linearly independent, invertible row operations bring it to the form [ I over 0 ]; for each i ∈ [t], the i-th row of the transforming matrix, read as coefficients on the degree ≤ r monomials, is a polynomial h_i with h_i(u_j) = 1 iff i = j.
Proof of Lemma
Main Lemma (⇒): If u ∈ S, then there is a polynomial g with deg(g) ≤ r such that
Σ_{u_i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1.
If u = u_i, then g = h_i satisfies the conditions: the only surviving term of the sum is (f · h_i)(u_i) = f(u_i) = f(u).
Proof of Lemma
Claim 2
If u ∉ {u_1, ..., u_t}, then there is a polynomial f such that deg(f) ≤ r + 1 and f(u) = 1, but f(u_i) = 0 for all i = 1, ..., t.
Main Lemma (⇐): If u ∉ S, then there is no polynomial g with deg(g) ≤ r such that
Σ_{u_i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1.
(For the f from Claim 2, the left-hand side is 0 for every g, while f(u) = 1.)
Proof of Claim 2
Case 1: Suppose h_i(u) = 1 for some i.
Since u ̸= u_i, we have u(ℓ) ̸= (u_i)(ℓ) for some coordinate ℓ.
f(x) = h_i(x) · (x_ℓ − (u_i)(ℓ)) works: it has degree ≤ r + 1, vanishes at every u_j, and equals 1 at u.
Case 2: Suppose h_i(u) = 0 for all i.
Look at Σ h_i(x): it equals 1 at every u_j and 0 at u.
f(x) = 1 − Σ h_i(x) works: it has degree ≤ r, equals 1 at u, and 0 at every u_j.
Proof of Lemma
Therefore, if u_1^r, ..., u_t^r are linearly independent, then there exists a polynomial g with deg(g) ≤ r satisfying
Σ_{u_i∈S} (f · g)(u_i) = f(u) for every polynomial f with deg(f) ≤ r + 1
if and only if u = u_i for some i.
Decoding Algorithm
▶ Input: Received word w (= v + err_S)
▶ Compute E(m, 2r + 1) · w to get the value of Σ_{u_i∈S} f(u_i) for every polynomial f with deg(f) ≤ 2r + 1.
▶ For each u ∈ {0,1}^m, solve for a polynomial g with deg(g) ≤ r satisfying
Σ_{u_i∈S} (f · g)(u_i) = f(u) for every f satisfying deg(f) ≤ r + 1.
If there is a solution, then add u to Corruptions.
▶ Flip the coordinates in Corruptions and interpolate.
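The per-point test above is a consistency check for a linear system over F_2: the rows range over a basis of polynomials f of degree ≤ r + 1, the unknowns are the coefficients of g, and the right-hand side is f(u). A generic sketch of such a check (our own toy code; assembling the actual system from the syndrome sums is omitted here):

```python
# Sketch: is the system A x = b over F_2 consistent? Gaussian elimination
# on the augmented matrix; inconsistent iff a zero row gets RHS 1.
def solvable_gf2(A, b):
    """Return True iff there is x over F_2 with A x = b."""
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    n = len(A[0])
    rank = 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(aug)) if aug[i][col]), None)
        if pivot is None:
            continue
        aug[rank], aug[pivot] = aug[pivot], aug[rank]
        for i in range(len(aug)):
            if i != rank and aug[i][col]:
                aug[i] = [x ^ y for x, y in zip(aug[i], aug[rank])]
        rank += 1
    return all(any(row[:-1]) or row[-1] == 0 for row in aug)

print(solvable_gf2([[1, 1], [0, 1]], [1, 1]))  # True  (x = (0, 1))
print(solvable_gf2([[1, 1], [1, 1]], [1, 0]))  # False (inconsistent)
```

In the algorithm, u is declared corrupted exactly when the corresponding system is consistent.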
ICYMI
Remark: If r falls in the green zone (on the axis of r from 0 to m: the regimes r = o(√m/log m), r = O(√m), and r = o(m)), then RM(m, m − r − 1) can correct ≈ (m choose ≤r) random erasures.
Theorem: [S-Shpilka-Volk] Any pattern that is erasure-correctable in RM(m, m − r − 1) is efficiently error-correctable in RM(m, m − 2r − 2).
Corollary #1: (high-rate) Efficiently decodable from (1 − o(1))·(m choose ≤r) random errors in RM(m, m − 2r) if r = o(√m/log m). (The min distance of RM(m, m − 2r) is only 2^{2r}.)
Corollary #2: (low-rate) Efficiently decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)). (The min distance of RM(m, √m) is 2^{m−√m}.)
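The min-distance facts quoted in parentheses follow from dist(RM(m, d)) = 2^{m−d}, which can be sanity-checked by brute force for tiny parameters. This throwaway script is mine, not from the talk; it enumerates all truth tables of degree-≤d multilinear polynomials and takes the minimum nonzero Hamming weight.

```python
from itertools import combinations, product

def rm_min_distance(m, d):
    """Brute-force minimum weight of RM(m, d) over GF(2); feasible only for tiny m, d."""
    monos = [frozenset(c) for k in range(d + 1) for c in combinations(range(m), k)]
    pts = [tuple((i >> j) & 1 for j in range(m)) for i in range(1 << m)]
    # Truth table of each monomial prod_{i in T} x_i over all 2^m points.
    cols = [[int(all(p[i] for i in T)) for p in pts] for T in monos]
    best = 1 << m
    for coeffs in product((0, 1), repeat=len(monos)):
        if not any(coeffs):
            continue                                   # skip the zero codeword
        wt = sum(sum(c * col[j] for c, col in zip(coeffs, cols)) % 2
                 for j in range(1 << m))
        best = min(best, wt)
    return best

for m, d in [(3, 1), (4, 1), (4, 2)]:
    print(m, d, rm_min_distance(m, d), 2 ** (m - d))   # brute-force value matches 2^(m-d)
```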
The obvious open question
[Figure: the evaluation matrix M = E(m, r), rows indexed by monomial sets in ([m] choose ≤r), columns indexed by points v ∈ {0,1}^m; M(v) denotes the column of v.]
Question: Let R = (m choose ≤r). Suppose you pick (0.99)·R columns at random.
Are they linearly independent with high probability?
Known regimes of r (on the axis from 0 to m): r = o(√m/log m) [ASW-15], r = O(√m) [KMSU+KP-16], and r = o(m) [ASW-15]; the intermediate ranges remain OPEN.
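The question can at least be probed empirically for small m: build the columns M(v) of E(m, r) (every degree-≤r monomial evaluated at v), sample 0.99·R of them, and compute the GF(2) rank. A toy experiment of my own, with all helper names mine; it proves nothing, but makes the statement concrete.

```python
import random
from itertools import combinations

def columns(m, r):
    """Columns M(v) of E(m, r), each packed as an integer bitmask over the R monomials."""
    monos = [frozenset(c) for k in range(r + 1) for c in combinations(range(m), k)]
    cols = []
    for i in range(1 << m):
        supp = {j for j in range(m) if (i >> j) & 1}
        # Monomial prod_{i in T} x_i evaluates to 1 at v iff T is inside v's support.
        cols.append(sum(1 << k for k, T in enumerate(monos) if T <= supp))
    return cols, len(monos)

def gf2_rank(vecs):
    """Rank over GF(2): reduce each vector against pivots keyed by leading bit."""
    pivots = {}
    for v in vecs:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

m, r = 8, 2
cols, R = columns(m, r)
assert gf2_rank(cols) == R          # all 2^m columns span: distinct monomials are independent
k = int(0.99 * R)                   # here R = 37, so k = 36
trials = 200
full = sum(gf2_rank(random.sample(cols, k)) == k for _ in range(trials))
print(f"R={R}, k={k}: {full}/{trials} random samples were linearly independent")
```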
Efficiently decoding Reed-Muller codes from random errors

  • 1. Keywords: Error-correcting codes, low-degree polynomials, randomness, linear algebra questions begging to be resolved
  • 2. A Puzzle: Your friend picks a polynomial f(x) ∈ F_2[x_1, ···, x_m] of degree r ≈ ∛m.
  • 3. She gives you the entire truth-table of f, i.e. the value of f(v) for every v ∈ {0,1}^m: 1 0 1 1 0 0 0 1 0 1 1 0 0 0 1 1 (at points 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111).
  • 4. Can you recover f?
  • 5. Of course! Just interpolate.
  • 6. But now she corrupts 49% of the bits. Can you recover f?
  • 8. (Corrupted truth-table: 0 1 0 0 1 1 1 1 0 1 1 0 0 0 1 1.) Can you recover f?
  • 10. Impossible if errors are adversarial...
  • 11. But suppose she corrupts 49% of the bits randomly. Can you recover f?
  • 12. This talk: Yes, and we can do it efficiently!
  • 13. ("Efficiently decoding Reed-Muller codes from random errors")
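Slide 5's "just interpolate" step is Möbius inversion over the subset lattice of {0,1}^m; over F_2 the transform is its own inverse. A minimal Python sketch (the function names and the bit-order convention — most significant bit = x_1 — are my own), checked against the table of f = x_1x_2 + x_3x_4 that appears later in the deck:

```python
def interpolate_f2(truth):
    # truth[mask] = f(point mask); returns the coefficient vector of the unique
    # multilinear polynomial over F_2 with this truth table (Möbius transform).
    c = list(truth)
    n = len(c)
    step = 1
    while step < n:
        for mask in range(n):
            if mask & step:
                c[mask] ^= c[mask ^ step]  # subtract = add over F_2
        step <<= 1
    return c

def evaluate(coeffs, point):
    # f(point) = XOR of coeffs[S] over all monomials S whose variables are set in point.
    total = 0
    for s, cs in enumerate(coeffs):
        if cs and (s & point) == s:
            total ^= 1
    return total

# Truth table of f = x1*x2 + x3*x4 (points 0000..1111, MSB = x1).
truth = [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0]
coeffs = interpolate_f2(truth)
```

With this convention the recovered support is exactly the two monomial masks 0b1100 (x_1x_2) and 0b0011 (x_3x_4).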
  • 14. Efficiently decoding Reed-Muller codes from random errors. Ramprasad Saptharishi (TIFR), Amir Shpilka (Tel Aviv University), Ben Lee Volk (Tel Aviv University).
  • 16. Typical Setting: Alicia Machado sends "Don't vote for Trump".
  • 17. The channel corrupts the message...
  • 18. Bobby Jindal receives "Dude vote for Trump".
  • 19. We want to encode messages with redundancy to make them resilient to errors.
  • 20. Reed-Muller Codes: RM(m, r). Message: a polynomial f ∈ F_2[x_1, ···, x_m] of degree at most r. Encoding: the evaluations of f on all points of {0,1}^m. Example: f(x_1, x_2, x_3, x_4) = x_1x_2 + x_3x_4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 (at points 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111).
  • 21. A linear code, with
  • 22. ▶ Block Length: 2^m =: n
  • 23. ▶ Distance: 2^{m−r} (lightest codeword: x_1x_2 ··· x_r)
  • 24. ▶ Dimension: (m choose 0) + (m choose 1) + ··· + (m choose r) =: (m choose ≤r)
  • 25. ▶ Rate: dimension/block length = (m choose ≤r)/2^m
  • 26. Generator Matrix: the evaluation matrix E(m, r) of degree ≤ r monomials — rows indexed by monomials M with deg(M) ≤ r, columns by points v ∈ F_2^m, with entry M(v).
  • 27. (Every codeword is spanned by the rows.)
  • 28. Also called the inclusion matrix (M(v) = 1 if and only if "M ⊆ v").
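The evaluation matrix E(m, r) and the encoding map can be sketched directly (helper names are mine; rows are the monomials of degree ≤ r, columns the points of {0,1}^m in lexicographic order, matching the slide's 0000..1111 labels):

```python
from itertools import combinations, product

def monomials(m, r):
    # All multilinear monomials of degree <= r, as sorted tuples of variable indices.
    return [s for k in range(r + 1) for s in combinations(range(m), k)]

def generator_matrix(m, r):
    # E(m, r): rows indexed by monomials of degree <= r, columns by points of {0,1}^m.
    pts = list(product([0, 1], repeat=m))
    return [[int(all(v[j] for j in M)) for v in pts] for M in monomials(m, r)]

def encode(coeffs, E):
    # Codeword = F_2-combination of the rows of E selected by the message coefficients.
    return [sum(c * row[i] for c, row in zip(coeffs, E)) % 2 for i in range(len(E[0]))]
```

For RM(4, 2) this gives an 11 × 16 matrix (dimension (4 choose ≤2) = 11, block length 2^4 = 16), and encoding the message with 1's on the monomials x_1x_2 and x_3x_4 reproduces the slide's codeword.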
  • 29. Decoding RM(m, r). Worst-case errors:
  • 30. Up to d/2, where d = 2^{m−r} is the minimum distance.
  • 31. (algorithm by [Reed54])
  • 32. List Decoding: the maximum radius with a constant number of codewords.
  • 33. [Gopalan-Klivans-Zuckerman08, Bhowmick-Lovett15]: list-decoding radius = d.
  • 34. Two schools of study. ▶ Hamming Model, a.k.a. worst-case errors: generally the model of interest for complexity theorists; Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs). ▶ Shannon Model, a.k.a. random errors: the standard model for coding theorists; recent breakthroughs (e.g. Arıkan's polar codes).
  • 35. An ongoing research endeavor: How do Reed-Muller codes perform in the Shannon model?
  • 36. Models for random corruptions (channels). Binary Erasure Channel — BEC(p): each bit is independently replaced by '?' with probability p.
  • 37. 0 0 1 1 0 0 0 1
  • 38. 0 0 1 1 0 ? ? ?
  • 39. Binary Symmetric Channel — BSC(p): each bit is independently flipped with probability p.
  • 40. 0 0 1 1 0 0 0 1
  • 41. 0 0 1 1 0 1 1 0
  • 42. (almost) equivalent: a fixed number t ≈ pn of random errors.
  • 43. Channel Capacity. Question: Given a channel, what is the best rate we can hope for?
  • 45. BEC(p): If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n?
  • 46. Intuitively, (1 − p)·n.
  • 48. BSC(p): If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n?
  • 49. Intuitively, (1 − H(p))·n (since (n choose pn) ≈ 2^{H(p)·n}).
  • 50. [Shannon48]: The maximum rate that enables decoding (w.h.p.) is 1 − p for BEC(p) and 1 − H(p) for BSC(p). Codes achieving this bound are called capacity-achieving.
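The two capacity formulas as a small sketch (H is the binary entropy with the usual convention H(0) = H(1) = 0; function names are mine):

```python
from math import log2

def H(p):
    # Binary entropy in bits.
    if p in (0, 1):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bec_capacity(p):
    # Erasure channel: each symbol survives with probability 1 - p.
    return 1 - p

def bsc_capacity(p):
    # Flip channel: 1 - H(p), since ~2^{H(p) n} error patterns must be distinguished.
    return 1 - H(p)
```

For example, BSC(0.11) has capacity almost exactly 1/2, so rate-1/2 codes can in principle tolerate ~11% random bit flips.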
  • 52. Motivating questions for this talk: How well do Reed-Muller codes perform in the Shannon Model? In BEC(p)? In BSC(p)?
  • 53. Are they as good as polar codes?
  • 58. Outline: So far · RM codes and erasures · Decoding errors, efficiently · Conclusions and Q&A. (Diagram not to scale.)
  • 59. Dual of a code.
  • 60. Any linear space can be specified by a generating basis, or as the solution set of a system of constraints.
  • 61. C⊥ = {u : ⟨v, u⟩ = 0 for every v ∈ C}
  • 62. Parity Check Matrix H (a basis for C⊥ stacked as rows): H·v = 0 ⇐⇒ v ∈ C.
  • 63. Linear codes and erasures: 0 0 1 1 0 ? ? ?. Question: When can we decode from a pattern of erasures?
  • 64. (Picture: candidate codewords 0 0 1 1 0 0 0 1 and 1 0 1 0 0 0 1 0, and the non-zero word 0 0 0 0 0 1 1 0, supported on the erased positions.)
  • 67. Decodable if and only if no non-zero codeword is supported on the erasures.
  • 68. Note: this depends only on the erasure pattern.
  • 70. Observation: A pattern of erasures is decodable if and only if the corresponding columns of the Parity Check Matrix are linearly independent.
  • 71. So for a code to be good for BEC(p), its Parity Check Matrix must be "robustly high-rank".
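The observation gives an immediate decodability test for a given erasure pattern: row-reduce the erased columns of the parity-check matrix over F_2 and compare the rank to the number of erasures. A sketch (assuming the dual-degree fact stated later in the deck, that the parity check of RM(m, r) is the generator of RM(m, m−r−1); all names are mine), tried on RM(3, 1), the [8, 4, 4] extended Hamming code, where any 3 erasures are correctable but the pattern {000, 001, 010, 011} of 4 erasures is not:

```python
from itertools import combinations, product

def monomials(m, r):
    return [s for k in range(r + 1) for s in combinations(range(m), k)]

def f2_rank(columns):
    # columns: list of ints (bitmasks over rows). Standard F_2 basis by pivot bit.
    basis = {}
    for v in columns:
        while v:
            p = v.bit_length() - 1
            if p not in basis:
                basis[p] = v
                break
            v ^= basis[p]
    return len(basis)

def erasure_decodable(m, r, erased):
    # The erased positions are decodable in RM(m, r) iff the corresponding columns
    # of its parity-check matrix (= generator matrix of RM(m, m-r-1)) are independent.
    pts = list(product([0, 1], repeat=m))
    cols = []
    for i in erased:
        v, col = pts[i], 0
        for row, M in enumerate(monomials(m, m - r - 1)):
            if all(v[j] for j in M):
                col |= 1 << row
        cols.append(col)
    return f2_rank(cols) == len(cols)
```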
  • 72. Reed-Muller codes under erasures. Cool Fact: The dual of RM(m, r) is RM(m, r′), where r′ = m − r − 1.
  • 73. Hence, the Parity Check Matrix of RM(m, r) is the generator matrix of RM(m, r′).
  • 74. (Matrix picture: rows indexed by monomials M with deg(M) ≤ r′, i.e. subsets of [m] of size ≤ r′; columns by points v ∈ {0,1}^m; entry M(v).)
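The Cool Fact is easy to sanity-check for small m: every generator row of RM(m, r) is orthogonal over F_2 to every generator row of RM(m, m−r−1), because the inner product of the rows of monomials M and N counts the points where M∪N evaluates to 1, which is 2^{m−|M∪N|} with |M∪N| ≤ m−1, hence even; and the two dimensions sum to 2^m, so the span is exactly the dual. A small verification sketch (function names mine):

```python
from itertools import combinations, product

def monomials(m, r):
    return [s for k in range(r + 1) for s in combinations(range(m), k)]

def eval_rows(m, r):
    # Rows of the generator matrix of RM(m, r): monomial evaluations on {0,1}^m.
    pts = list(product([0, 1], repeat=m))
    return [[int(all(v[j] for j in M)) for v in pts] for M in monomials(m, r)]

def duality_holds(m, r):
    # Orthogonality of all row pairs + dimension count == 2^m certifies
    # that RM(m, m-r-1) is exactly the dual of RM(m, r).
    G, Gd = eval_rows(m, r), eval_rows(m, m - r - 1)
    orthogonal = all(sum(a * b for a, b in zip(g, h)) % 2 == 0
                     for g in G for h in Gd)
    return orthogonal and len(G) + len(Gd) == 2 ** m
```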
  • 75. Reed-Muller codes under erasures. (Same matrix picture: rows indexed by monomials of degree ≤ r′, columns by v ∈ {0,1}^m, entry M(v).)
  • 76. Question: Let R = (m choose ≤r′). Suppose you pick (0.99)R columns at random. Are they linearly independent with high probability?
  • 77. (Number-line picture of r′ from 0 to m: known for r′ = o(√m/log m) [ASW-15] and r′ = O(√m) [KMSU+KP-16], and in the regime labeled o(m) [ASW-15]; the remaining ranges are OPEN.)
  • 79. From erasures to errors. Theorem [ASW]: Any pattern correctable from erasures in RM(m, m − r − 1) is correctable from errors in RM(m, m − 2r − 2).
  • 80. Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤r) random erasures.
  • 81. Corollary #1 (high-rate): Decodable from (1 − o(1))·(m choose ≤r) random errors in RM(m, m − 2r) if r = o(√m/log m). (The minimum distance of RM(m, m − 2r) is 2^{2r}.)
  • 83. Corollary #2 (low-rate): Decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)). (If r = m/2 − o(√m), then m − 2r − 2 = o(√m); the minimum distance of RM(m, √m) is 2^{m−√m}.)
  • 84. [S-Shpilka-Volk]: Efficient decoding from errors.
  • 86. Outline: So far · RM codes and erasures · Decoding errors, efficiently.
  • 87. What we want to prove. Theorem [S-Shpilka-Volk]: There exists an efficient algorithm with the following guarantee: given a corrupted codeword w = v + err_S of RM(m, m − 2r − 2), if S happens to be a correctable erasure pattern in RM(m, m − r − 1), then the algorithm correctly decodes v from w.
  • 88. What we have access to. Received word: w := v + err_S for some v ∈ RM(m, m − 2r − 2) and S = {u_1, ..., u_t}.
  • 89. The Parity Check Matrix of RM(m, m − 2r − 2) is the generator matrix of RM(m, 2r + 1).
  • 90. Hence E(m, 2r + 1) · v = 0,
  • 92. and the row of E(m, 2r + 1) · w corresponding to a monomial M equals M(u_1) + ··· + M(u_t).
  • 93. So we have access to ∑_{i∈S} M(u_i) for every monomial M with deg(M) ≤ 2r + 1,
  • 94. and hence to ∑_{i∈S} f(u_i) for every polynomial f with deg(f) ≤ 2r + 1.
  • 96. Erasure-Correctable Patterns. A pattern of erasures is correctable if and only if the corresponding columns in the parity check matrix are linearly independent.
  • 97. The parity check matrix for RM(m, m − r − 1) is the generator matrix for RM(m, r).
  • 98. Corollary: A set of positions S = {u_1, ..., u_t} is erasure-correctable in RM(m, m − r − 1) if and only if u_1^r, ..., u_t^r are linearly independent. (Here u_i^r is just the vector of all degree ≤ r monomials evaluated at u_i.)
  • 99. The Decoding Algorithm. Input: received word w (= v + err_S).
  • 100. Lemma: Assume that S is a pattern of erasures correctable in RM(m, m − r − 1).
  • 101. Then for any u ∈ {0,1}^m, we have u ∈ S if and only if there exists a polynomial g with deg(g) ≤ r such that ∑_{i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1.
  • 102. This can be checked by solving a system of linear equations; the algorithm is then straightforward.
  • 103. Proof of Lemma. Claim 1: Let u_1, ..., u_t ∈ {0,1}^m be such that u_1^r, ..., u_t^r are linearly independent. Then for each i ∈ [t] there is a polynomial h_i such that: ▶ deg(h_i) ≤ r, and ▶ h_i(u_j) = 1 if i = j, and 0 otherwise.
  • 104. Proof: stack u_1^r, ..., u_t^r as the columns of a matrix;
  • 105. row operations bring it to the form (I over 0), and the rows of the transformation give the coefficient vectors of the h_i.
  • 107. Main Lemma (⇒): If u ∈ S, then there is a polynomial g with deg(g) ≤ r such that ∑_{u_i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1.
  • 108. Indeed, if u = u_i, then g = h_i satisfies the conditions, since ∑_j f(u_j)·h_i(u_j) = f(u_i).
  • 111. Claim 2: If u ∉ {u_1, ..., u_t}, then there is a polynomial f such that deg(f) ≤ r + 1 and f(u) = 1, but f(u_i) = 0 for all i = 1, ..., t.
  • 112. Main Lemma (⇐): If u ∉ S, then there is no polynomial g with deg(g) ≤ r such that ∑_{u_i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1. (For the f from Claim 2, the left-hand side equals ∑_i f(u_i)·g(u_i) = 0 for every g, while f(u) = 1.)
  • 113. Proof of Claim 2.
  • 114. Case 1: h_i(u) = 1 for some i.
  • 115. Since u ≠ u_i, we have u_(ℓ) ≠ (u_i)_(ℓ) for some coordinate ℓ.
  • 116. Then f(x) = h_i(x) · (x_ℓ − (u_i)_(ℓ)) works: f(u) = 1, f(u_i) = 0, and f(u_j) = 0 for j ≠ i.
  • 117. Case 2: h_i(u) = 0 for all i.
  • 118. Look at ∑ h_i(x): it evaluates to 0 at u and to 1 at each u_i.
  • 119. So f(x) = 1 − ∑ h_i(x) works.
  • 122. Therefore, if u_1^r, ..., u_t^r are linearly independent, then there exists a polynomial g with deg(g) ≤ r satisfying ∑_{u_i∈S} (f · g)(u_i) = f(u) for every polynomial f with deg(f) ≤ r + 1 if and only if u = u_i for some i.
  • 123. Decoding Algorithm. ▶ Input: received word w (= v + err_S).
  • 124. ▶ Compute E(m, 2r + 1)·w to get the value of ∑_{u_i∈S} f(u_i) for every polynomial f with deg(f) ≤ 2r + 1.
  • 125. ▶ For each u ∈ {0,1}^m, solve for a polynomial g with deg(g) ≤ r satisfying ∑_{u_i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1. If there is a solution, add u to Corruptions.
  • 126. ▶ Flip the coordinates in Corruptions and interpolate.
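The three steps above can be put together for toy parameters. Below is a self-contained Python sketch of the error-location step for m = 4, r = 1 (so the underlying code is RM(4, m − 2r − 2) = RM(4, 0)); all function names are mine, and this is a naive O(2^m)-candidates implementation of the lemma's linear-system test, not the paper's algorithm. The key simplification is that on {0,1}^m the product of monomials a and b equals the monomial a ∪ b (since x² = x), so every sum ∑_{i∈S}(f·g)(u_i) is linear in g's coefficients with entries read directly off the syndrome:

```python
from itertools import combinations, product

def monomials(m, d):
    # Multilinear monomials of degree <= d, as sorted tuples of variable indices.
    return [s for k in range(d + 1) for s in combinations(range(m), k)]

def mono_eval(M, v):
    return int(all(v[j] for j in M))

def solve_f2(A, y):
    # Gaussian elimination over F_2 on [A | y]; returns a solution or None.
    rows = [list(r) + [b] for r, b in zip(A, y)]
    ncols = len(A[0])
    rank, pivots = 0, []
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        pivots.append(col)
        rank += 1
    if any(row[ncols] for row in rows[rank:]):
        return None  # inconsistent system
    sol = [0] * ncols
    for i, col in enumerate(pivots):
        sol[col] = rows[i][ncols]
    return sol

def locate_errors(w, m, r):
    # w = v + err_S with v in RM(m, m-2r-2); returns the corrupted points,
    # assuming S is an erasure-correctable pattern in RM(m, m-r-1).
    pts = list(product([0, 1], repeat=m))
    # Syndrome: for each monomial M with deg(M) <= 2r+1, sum_{i in S} M(u_i) mod 2.
    syn = {M: sum(mono_eval(M, v) * wv for v, wv in zip(pts, w)) % 2
           for M in monomials(m, 2 * r + 1)}
    mon_g = monomials(m, r)      # unknown coefficients of g
    mon_f = monomials(m, r + 1)  # one equation per test monomial f
    corrupted = []
    for u in pts:
        A = [[syn[tuple(sorted(set(a) | set(b)))] for b in mon_g] for a in mon_f]
        y = [mono_eval(a, u) for a in mon_f]
        if solve_f2(A, y) is not None:
            corrupted.append(u)
    return corrupted
```

For example, with v the all-ones codeword of RM(4, 0) and errors at the points 0001, 0010, 0100 (whose degree ≤ 1 evaluation vectors are linearly independent, so the pattern is erasure-correctable in RM(4, 2)), `locate_errors` recovers exactly those three positions; flipping them restores v.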
  • 128. Outline: So far · RM codes and erasures · Decoding errors, efficiently · Conclusions and Q&A.
  • 129. ICYMI. Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤r) random erasures.
  • 130. Theorem [S-Shpilka-Volk]: Any pattern that is erasure-correctable in RM(m, m − r − 1) is efficiently error-correctable in RM(m, m − 2r − 2).
  • 131. Corollary #1 (high-rate): Efficiently decodable from (1 − o(1))·(m choose ≤r) random errors in RM(m, m − 2r) if r = o(√m/log m). (The minimum distance of RM(m, m − 2r) is 2^{2r}.)
  • 132. Corollary #2 (low-rate): Efficiently decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)). (The minimum distance of RM(m, √m) is 2^{m−√m}.)
  • 133. The obvious open question. (Evaluation matrix picture: rows indexed by monomials M of degree ≤ r, i.e. subsets of [m] of size ≤ r; columns by v ∈ {0,1}^m; entry M(v).) Question: Let R = (m choose ≤r). Suppose you pick (0.99)R columns at random. Are they linearly independent with high probability?
  • 134. (Number-line picture of r from 0 to m: known for r = o(√m/log m) [ASW-15] and r = O(√m) [KMSU+KP-16], and in the regime labeled o(m) [ASW-15]; the remaining ranges are OPEN.)