80. Concretely?
Deep Learning = Learning Hierarchical Representations
Y LeCun
MA Ranzato
It's deep if it has more than one stage of non-linear feature transformation:
Low-Level Feature → Mid-Level Feature → High-Level Feature → Trainable Classifier
Neuron: a computational element modeled on nerve cells
Learning: searching for commonalities in the data
Saturday, October 19, 13
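The pipeline on this slide (low-level → mid-level → high-level features feeding a trainable classifier) can be sketched as a stack of non-linear transformations. The layer widths, ReLU non-linearity, and random weights below are illustrative assumptions for the sketch, not the tutorial's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x, w):
    """One stage of non-linear feature transformation: linear map + ReLU."""
    return np.maximum(0.0, x @ w)

# Illustrative widths: input -> low-level -> mid-level -> high-level -> class scores
widths = [8, 16, 12, 6, 3]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(widths, widths[1:])]

x = rng.standard_normal((1, widths[0]))   # one input sample
features = x
for w in weights[:-1]:                    # low-, mid-, high-level features
    features = stage(features, w)
scores = features @ weights[-1]           # trainable (here: linear) classifier
print(scores.shape)                       # (1, 3)
```

Each `stage` call is one "stage of non-linear feature transformation"; with three of them before the classifier, the stack is deep in the slide's sense.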
81. Concretely?
Deep Learning = Learning Hierarchical Representations
(from LeCun and Ranzato, ICML Tutorial '13)
82. Concretely?
Deep Learning = Learning Hierarchical Representations
(from LeCun and Ranzato, ICML Tutorial '13)
Feature visualization of convolutional net trained on ImageNet from [Zeiler & Fergus 2013]
83. Concretely?
Deep Learning = Learning Hierarchical Representations
(from LeCun and Ranzato, ICML Tutorial '13)
Feature visualization of convolutional net trained on ImageNet from [Zeiler & Fergus 2013]
84. Example: image recognition
IMAGENET: an image database
In the 2012 competition, a team using Deep Learning won decisively.
Our model (Krizhevsky et al., 2012):
● Max-pooling layers follow the first, second, and fifth convolutional layers
● The number of neurons in each layer is given by 253440, 186624, 64896, 64896, 43264, 4096, 4096, 1000
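The max-pooling step mentioned in the bullets can be sketched in NumPy. Note that Krizhevsky et al.'s network actually uses overlapping 3×3 pooling with stride 2; the simpler non-overlapping 2×2 window below is an illustrative assumption to show what pooling a feature map does.

```python
import numpy as np

def max_pool2d(fmap, k=2):
    """Non-overlapping k x k max-pooling over a 2-D feature map."""
    h, w = fmap.shape
    h, w = h - h % k, w - w % k               # trim edges not covered by a window
    trimmed = fmap[:h, :w]
    # Group each k x k window into its own axes, then take the window maximum.
    return trimmed.reshape(h // k, k, w // k, k).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)          # toy 4x4 feature map
pooled = max_pool2d(fmap)                     # pooled == [[5., 7.], [13., 15.]]
```

Pooling halves each spatial dimension here, which is how the neuron counts shrink between the convolutional layers listed on the slide.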
87. References
● Bengio, 2012 ICML Tutorial
● LeCun and Ranzato, 2013 ICML Tutorial
● A Fast Learning Algorithm for Deep Belief Nets, Hinton et al., 2006
● Serialized survey "Deep Learning" in the Journal of the Japanese Society for Artificial Intelligence (人工知能学会誌)
● The Future of Robotics and AI, Ng, YouTube