Auto-Encoding Variational Bayes
1. Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
Machine Learning Group Universiteit van Amsterdam
ICLR 2014 conference submission
May 23, 2019
SMC AI Research Center
Kyuri Kim
2. Introduction
1.1 Supervised Learning vs. Unsupervised Learning

Supervised learning
Data: (x, y); x is data, y is its label.
Goal: learn a function to map x → y.
Ex. classification, regression, object detection, semantic segmentation, image captioning

Unsupervised learning
Data: (x); just data, no labels.
Goal: learn the underlying hidden structure of the data.
Ex. clustering, dimensionality reduction, density estimation, feature learning

[Figure: 1-d and 2-d density estimation examples]
3. Introduction
1.2 Auto Encoder (Feature Learning)
1. Unsupervised Learning
2. ML density estimation
3. Manifold Learning
4. Generative model learning
In auto-encoder training, the encoder maps the input x to a latent variable z, the decoder maps z back to a reconstruction y, and training minimizes the reconstruction error L(x, y).

In a trained auto-encoder, the decoder can generate at least the training data, and the encoder can represent at least the training data as latent vectors.
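The training loop above can be sketched with a toy linear auto-encoder; everything here (sizes, learning rate, weight names) is illustrative rather than from the paper:

```python
import numpy as np

# Toy linear auto-encoder: encoder z = x @ W_e, decoder y = z @ W_d,
# trained by gradient descent to minimize the reconstruction error
# L(x, y) = ||x - y||^2. All shapes and hyperparameters are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # toy training data, 8-d
W_e = 0.1 * rng.normal(size=(8, 2))      # encoder weights: 8-d -> 2-d latent
W_d = 0.1 * rng.normal(size=(2, 8))      # decoder weights: 2-d -> 8-d
lr = 0.05

loss_before = float(np.mean((X - (X @ W_e) @ W_d) ** 2))
for _ in range(1000):
    Z = X @ W_e                          # encoding: latent variables
    Y = Z @ W_d                          # decoding: reconstruction
    err = (Y - X) / len(X)               # d(mean reconstruction error)/dY
    W_d -= lr * Z.T @ err                # gradient step on the decoder
    W_e -= lr * X.T @ (err @ W_d.T)      # gradient step on the encoder
loss_after = float(np.mean((X - (X @ W_e) @ W_d) ** 2))
```

After training, the reconstruction error is lower than at initialization, i.e. the encoder/decoder pair has learned to represent and regenerate the training data.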
4. Introduction
1.3 Generative Model
Given training data, generate new samples from the same distribution.
Ex. Variational Auto-Encoders (VAE), Generative Adversarial Networks (GAN)

Training data ~ p_data(x); generated samples ~ p_model(x).
We want to learn a probability density function p_model(x) similar to p_data(x).
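The simplest instance of "learn p_model(x) similar to p_data(x)" is fitting a parametric density by maximum likelihood; this hypothetical sketch uses a 1-d Gaussian model, whose maximum-likelihood parameters have a closed form:

```python
import numpy as np

# p_data: samples from an unknown distribution (here a Gaussian we chose
# for the toy example). p_model: a Gaussian whose parameters we estimate
# by maximum likelihood (sample mean and sample std).
rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=10_000)  # training data ~ p_data(x)

mu_hat = float(data.mean())      # MLE of the mean
sigma_hat = float(data.std())    # MLE of the standard deviation

# generate new samples ~ p_model(x)
samples = rng.normal(mu_hat, sigma_hat, size=5)
```

A VAE plays the same game with a far more expressive p_model: a neural decoder turns a simple latent distribution into a complex density over x.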
5. Introduction
1.4 Generative Model Network
[Figure: generative model taxonomy, from Ian Goodfellow, "NIPS 2016 Tutorial: Generative Adversarial Networks"]
6. Experiments
2.1 Variational Auto-Encoder
[Figure: decoder network mapping a latent variable z to target data x]

Sample z from the true prior p(z), then sample target data x from the true conditional p(x|z); the decoder network models p(x|z).

How to train the model? Maximum likelihood estimation.
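The generative process described above (sample from the prior, then from the conditional) can be sketched as ancestral sampling; the decoder here is a fixed toy nonlinearity, an assumption for illustration only:

```python
import numpy as np

# Ancestral sampling sketch: z ~ p(z) is a standard-normal prior, and
# x ~ p(x|z) is a Gaussian whose mean is produced by a toy "decoder"
# (a fixed random linear map with a tanh nonlinearity; assumed, not
# a trained network).
rng = np.random.default_rng(2)
W = rng.normal(size=(2, 5))                  # toy decoder weights (assumed)

def sample_x(n):
    z = rng.normal(size=(n, 2))              # sample z from the prior p(z)
    mean_x = np.tanh(z @ W)                  # decoder output: mean of p(x|z)
    return mean_x + 0.1 * rng.normal(size=mean_x.shape)  # x ~ p(x|z)

x = sample_x(4)                              # four generated samples
```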
7. Experiments
2.2 Variational Inference

[Figure: variational inference; generator g_θ(·) maps the latent variable z to target data x]

Rather than sampling z from a simple prior, we would like to sample from p(z|x), the distribution most likely to yield z values meaningful for x. But p(z|x) is unknown, so we choose a tractable distribution q_φ(z|x), adjust its parameters φ so that q_φ(z|x) ≈ p(z|x), and sample z from it.
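One reason q_φ is chosen from a tractable family: when q_φ(z|x) is a diagonal Gaussian N(μ, σ²) and the prior is p(z) = N(0, I), the KL divergence between them, which appears in the VAE objective, has a closed form. A minimal sketch:

```python
import numpy as np

# Closed-form KL divergence KL(q || p) for a diagonal Gaussian
# q = N(mu, sigma^2) against the standard-normal prior p = N(0, I):
# KL = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
def kl_to_standard_normal(mu, log_var):
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# sanity check: when q already equals the prior, the divergence is zero
assert kl_to_standard_normal(np.zeros(2), np.zeros(2)) == 0.0

# a 2-d q with means (1, -1) and variances (0.5, 0.5)
kl = kl_to_standard_normal(np.array([1.0, -1.0]),
                           np.log(np.array([0.5, 0.5])))
```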
8. Experiments
2.3 Variational Auto-Encoder

(1) Definition of VAE
A generative model that implements variational inference through the structure of an auto-encoder.

(2) Structure of VAE
In a plain auto-encoder, z is a deterministic value; in a VAE, z is treated probabilistically, as a distribution. Given x, the encoder q_φ(·) outputs the probability of z given x, performing variational inference with q_φ(z|x) ≈ p(z|x), and the decoder g_θ(·) maps a sampled z back to x.
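Putting the pieces together, one forward pass of the VAE objective can be sketched as follows. The linear encoder/decoder and all shapes are toy assumptions (a real VAE uses neural networks trained by backpropagation); the pass shows the reparameterization z = μ + σ·ε and the loss as reconstruction error plus the KL term:

```python
import numpy as np

# One forward pass of the (negative) ELBO for a toy VAE.
# Encoder q_phi(z|x) outputs (mu, log_var); the reparameterization
# trick keeps the sampling step differentiable in mu and log_var.
rng = np.random.default_rng(3)
x = rng.normal(size=(16, 8))                     # a toy data batch
W_mu = 0.1 * rng.normal(size=(8, 2))             # encoder mean weights (assumed)
W_lv = 0.1 * rng.normal(size=(8, 2))             # encoder log-var weights (assumed)
W_dec = 0.1 * rng.normal(size=(2, 8))            # decoder weights (assumed)

mu, log_var = x @ W_mu, x @ W_lv                 # parameters of q_phi(z|x)
eps = rng.normal(size=mu.shape)                  # eps ~ N(0, I)
z = mu + np.exp(0.5 * log_var) * eps             # reparameterization trick
x_hat = z @ W_dec                                # decoder g_theta(z)

recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))            # reconstruction error
kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
loss = recon + kl                                # negative ELBO (up to constants)
```

Training would repeat this pass and update W_mu, W_lv, and W_dec by gradient descent on loss.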