Detailed Description
on Cross Entropy Loss Function
ICSL Seminar
김범준
2019. 01. 03
 Cross Entropy Loss
- Used universally in classification problems
- Computes the cross entropy between the prediction and the label
- Goals: examine the concrete theoretical basis and interpret the intuitive meaning

H(P, Q) = −∑_{i=1}^{c} p_i log(q_i)
• Theoretical Derivation
- Binary Classification Problem
- Multiclass Classification Problem
• Intuitive understanding
- Relation to the KL-Divergence
Binary image classifier (a NN with parameters θ):

Image Classifier    Prediction           Label
x_1 → NN(θ)         h_θ(x_1) = 0.1       y_1 = 0
x_2 → NN(θ)         h_θ(x_2) = 0.95      y_2 = 1

Training Dataset: (x_1, …, x_m) with labels (y_1, …, y_m), e.g. [0, 0, 0, 1, 1, 1]

Likelihood: L(θ) = p(y_1, …, y_m | x_1, …, x_m; θ)
: the degree to which the prediction [0, 0, 0, 1, 1, 1] is plausible under θ

Maximum Likelihood: θ* = argmax_θ L(θ)
: choose the θ under which the prediction [0, 0, 0, 1, 1, 1] is most plausible

Notation: p(Y = y_i | X = x_i) = p(y_i | x_i)
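The likelihood above can be made concrete with a toy numeric sketch. The six classifier outputs below are illustrative values invented for this example (only h_θ(x_1) = 0.1 and h_θ(x_2) = 0.95 come from the slides); the labels are the deck's [0, 0, 0, 1, 1, 1].

```python
# A hypothetical binary classifier's outputs: h[i] = predicted p(y_i = 1 | x_i).
# The first and fourth-to-sixth values are illustrative, not from the slides.
h = [0.1, 0.2, 0.05, 0.9, 0.95, 0.8]
y = [0, 0, 0, 1, 1, 1]

# Likelihood L(theta) = product over samples of the probability the model
# assigns to the observed label (under the i.i.d. assumption).
likelihood = 1.0
for h_i, y_i in zip(h, y):
    likelihood *= h_i if y_i == 1 else (1.0 - h_i)

print(likelihood)  # the degree to which these labels are plausible under theta
```

A model that assigns high probability to each observed label yields a likelihood near 1; any confidently wrong output drives the product toward 0.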
With y_i ∈ {0, 1} and h_θ(x_i) the predicted probability that x_i belongs to class 1:

p(y_i = 1 | x_i; θ) = h_θ(x_i)
p(y_i = 0 | x_i; θ) = 1 − h_θ(x_i)

That is, p(y_i | x_i; θ) = h_θ(x_i)^{y_i} (1 − h_θ(x_i))^{1−y_i} : a Bernoulli distribution

L(θ) = p(y_1, …, y_m | x_1, …, x_m; θ)
     = ∏_{i=1}^{m} p(y_i | x_i; θ)    (∵ i.i.d.* assumption)
     = ∏_{i=1}^{m} h_θ(x_i)^{y_i} (1 − h_θ(x_i))^{1−y_i}

* i.i.d. : independent and identically distributed
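As a quick sanity check on the exponent form of the Bernoulli distribution, a minimal sketch (the probabilities 0.95 and 0.1 echo the slide's two examples):

```python
def bernoulli_p(h, y):
    # p(y | x; theta) = h^y * (1 - h)^(1 - y), with h = h_theta(x)
    return (h ** y) * ((1.0 - h) ** (1 - y))

# The single exponent form reproduces both cases of the definition:
assert bernoulli_p(0.95, 1) == 0.95                 # y = 1 selects h
assert abs(bernoulli_p(0.1, 0) - 0.9) < 1e-12       # y = 0 selects 1 - h
```

The trick is that y ∈ {0, 1} switches one factor on and reduces the other to 1, which is what lets the two cases collapse into one expression inside the likelihood product.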
θ* = argmax L(θ)
   = argmin(−log L(θ))    (∵ log is a monotonically increasing function)
   = argmin ∑_{i=1}^{m} [−y_i log h_θ(x_i) − (1 − y_i) log(1 − h_θ(x_i))]    (∵ properties of log)
   = argmin ∑_{i=1}^{m} H(y_i, h_θ(x_i))

where H(y_i, h_θ(x_i)) = −y_i log h_θ(x_i) − (1 − y_i) log(1 − h_θ(x_i)) : Binary Cross Entropy
h_θ(x_i), y_i are probability values in [0, 1]

Maximize Likelihood ⇔ Minimize Binary Cross Entropy
Binary Classification Problem
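The identity −log L(θ) = ∑ H(y_i, h_θ(x_i)) driving the derivation can be verified numerically; the prediction values below are illustrative:

```python
import math

def bce(h, y):
    # Binary cross entropy H(y, h) = -y*log(h) - (1-y)*log(1-h)
    return -y * math.log(h) - (1 - y) * math.log(1 - h)

h = [0.1, 0.2, 0.05, 0.9, 0.95, 0.8]   # illustrative predictions p(y=1|x)
y = [0, 0, 0, 1, 1, 1]

# Likelihood under the Bernoulli model
L = 1.0
for h_i, y_i in zip(h, y):
    L *= (h_i ** y_i) * ((1 - h_i) ** (1 - y_i))

# -log of the product equals the sum of per-sample binary cross entropies
total_bce = sum(bce(h_i, y_i) for h_i, y_i in zip(h, y))
assert abs(-math.log(L) - total_bce) < 1e-9
```

This is why minimizing the summed BCE loss and maximizing the likelihood select the same θ: the log turns the product into a sum and the sign flip turns argmax into argmin.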
Multiclass (3-class) image classifier:

Image Classifier    Prediction                          Label
x_1 → NN(θ)         h_θ(x_1) = [0.9, 0.05, 0.05]        y_1 = [1, 0, 0]
x_2 → NN(θ)         h_θ(x_2) = [0.03, 0.95, 0.02]       y_2 = [0, 1, 0]
x_3 → NN(θ)         h_θ(x_3) = [0.01, 0.01, 0.98]       y_3 = [0, 0, 1]

Assuming one-hot encoded labels:
p(y_i = [1, 0, 0] | x_i; θ) = p(y_i(0) = 1 | x_i; θ) = h_θ(x_i)(0)
In the same way,
p(y_i = [0, 1, 0] | x_i; θ) = h_θ(x_i)(1)
p(y_i = [0, 0, 1] | x_i; θ) = h_θ(x_i)(2)

That is, p(y_i | x_i; θ) = h_θ(x_i)(0)^{y_i(0)} · h_θ(x_i)(1)^{y_i(1)} · h_θ(x_i)(2)^{y_i(2)}

Notation: p(Y = y_i | X = x_i) = p(y_i | x_i)
θ* = argmax L(θ)
   = argmin(−log L(θ))
   = argmin ∑_{i=1}^{m} [−y_i(0) log h_θ(x_i)(0) − y_i(1) log h_θ(x_i)(1) − y_i(2) log h_θ(x_i)(2)]
   = argmin ∑_{i=1}^{m} H(y_i, h_θ(x_i))

where H(P, Q) = −∑_{i=1}^{c} p_i log(q_i) : Cross Entropy
h_θ(x_i), y_i are probability distributions

Maximize Likelihood ⇔ Minimize Cross Entropy
Multiclass Classification Problem
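For one-hot labels, the cross entropy reduces to −log of the probability assigned to the true class; a small check using the slide's 3-class example values:

```python
import math

def cross_entropy(p, q):
    # H(P, Q) = -sum_i p_i * log(q_i); terms with p_i = 0 contribute nothing
    return -sum(p_i * math.log(q_i) for p_i, q_i in zip(p, q) if p_i > 0)

# One-hot labels and predictions from the slide's 3-class example
y1, h1 = [1, 0, 0], [0.9, 0.05, 0.05]
y2, h2 = [0, 1, 0], [0.03, 0.95, 0.02]

# With a one-hot P, only the true-class term survives:
assert abs(cross_entropy(y1, h1) - (-math.log(0.9))) < 1e-12
assert abs(cross_entropy(y2, h2) - (-math.log(0.95))) < 1e-12
```

This surviving term, −log h_θ(x_i)(true class), is exactly the negative log-likelihood of that sample, matching the derivation above.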
• Theoretical Derivation
- Binary Classification Problem
- Multiclass Classification Problem
• Intuitive understanding
- Relation to the KL-Divergence
H(P, Q) = ∑_{i=1}^{c} p_i log(1/q_i)
        = ∑_{i=1}^{c} (p_i log(p_i/q_i) + p_i log(1/p_i))
        = KL(P||Q) + H(P)

Cross-entropy = KL-Divergence* + the entropy of P itself
* KL-Divergence : Kullback–Leibler divergence
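The decomposition H(P, Q) = KL(P||Q) + H(P) can be checked numerically; P and Q below are arbitrary example distributions:

```python
import math

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.7, 0.2, 0.1]   # arbitrary example distributions
Q = [0.5, 0.3, 0.2]

# H(P, Q) = KL(P || Q) + H(P)
assert abs(cross_entropy(P, Q) - (kl(P, Q) + entropy(P))) < 1e-12
```

Since H(P) does not depend on Q, minimizing cross entropy over the model's output Q is the same optimization as minimizing the KL-divergence.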
θ* = argmax L(θ)
   = argmin ∑_{i=1}^{m} H(y_i, h_θ(x_i))
   = argmin ∑_{i=1}^{m} (KL(y_i || h_θ(x_i)) + H(y_i))    (∵ H(P, Q) = KL(P||Q) + H(P))
   = argmin ∑_{i=1}^{m} KL(y_i || h_θ(x_i))    (∵ the entropy of a one-hot encoded label is 0)

Maximize Likelihood ⇔ Minimize Cross Entropy ⇔ Minimize KL-Divergence
Multiclass Classification Problem
 From the information-theoretic point of view, the KL-divergence can be understood intuitively as a "degree of surprisal".
 (Example) Semifinal teams: LG Twins, Hanwha Eagles, NC Dinos, Samsung Lions
- Prediction model 1): ŷ = Q = [0.9, 0.03, 0.03, 0.04]
- Prediction model 2): ŷ = Q = [0.3, 0.6, 0.05, 0.05]
- Match result: y = P = [1, 0, 0, 0]
- Prediction model 2) experiences the greater surprise
- Minimizing the degree of surprisal → Q approximates P → the two probability distributions resemble each other → accurate prediction

KL(P||Q) = ∑_{i=1}^{c} p_i log(p_i / q_i)
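The baseball example can be computed directly; with a one-hot result P, KL(P||Q) reduces to −log of the probability the model gave the actual winner, so model 2's larger surprise shows up as a larger divergence:

```python
import math

def kl(p, q):
    # KL(P || Q) = sum_i p_i * log(p_i / q_i), skipping p_i = 0 terms
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P  = [1, 0, 0, 0]                # match result (one-hot)
Q1 = [0.9, 0.03, 0.03, 0.04]     # prediction model 1
Q2 = [0.3, 0.6, 0.05, 0.05]      # prediction model 2

# Model 2 assigned little probability to the actual outcome,
# so it is "more surprised": its KL-divergence is larger.
assert kl(P, Q2) > kl(P, Q1)

# With a one-hot P, KL(P || Q) collapses to -log(q_true):
assert abs(kl(P, Q1) - (-math.log(0.9))) < 1e-12
```

Here KL(P||Q1) ≈ 0.105 versus KL(P||Q2) ≈ 1.204, quantifying why the confident, correct model 1 is the better predictor.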
Multiclass Classification Problem:
Maximize Likelihood ⇔ Minimize Cross Entropy ⇔ Minimize KL-Divergence ⇔ Minimize Surprisal
→ Approximate the prediction to the label → Better classification performance in general