Pattern Recognition and Machine Learning: Section 3.3
Yusuke Oda
Slides used for a reading-group presentation of "Pattern Recognition and Machine Learning."
1.
Reading "Pattern Recognition and Machine Learning" §3.3 (Bayesian Linear Regression), Christopher M. Bishop
Presented by: Yusuke Oda (NAIST) @odashi_t
2013/6/5 © Yusuke Oda, AHC-Lab, IS, NAIST
2.
Agenda
3.3 Bayesian Linear Regression
– 3.3.1 Parameter distribution
– 3.3.2 Predictive distribution
– 3.3.3 Equivalent kernel
3.
(Agenda, repeated as a section transition)
4.
Bayesian Linear Regression
Maximum Likelihood (ML)
– The appropriate number of basis functions (≈ model complexity) depends on the size of the data set.
– Adding a regularization term lets us control model complexity.
– But how should we determine the coefficient of the regularization term?
5.
Bayesian Linear Regression
Maximum Likelihood (ML)
– Using ML to determine the coefficient of the regularization term is a bad choice:
  • it always leads to excessively complex models (= over-fitting).
– Using independent hold-out data to determine model complexity (see §1.3) is
  computationally expensive and wasteful of valuable data.
In the case of the previous slide, λ always becomes 0 when ML is used to determine λ.
6.
Bayesian Linear Regression
Bayesian treatment of linear regression
– Avoids the over-fitting problem of ML.
– Leads to automatic methods of determining model complexity using the training data alone.
What do we do?
– Introduce a prior distribution and a likelihood.
  • The model parameters are treated as random variables.
– Calculate the posterior distribution using Bayes' theorem:
  p(w | t) = p(t | w) p(w) / p(t) ∝ likelihood × prior
7.
(Agenda, repeated as a section transition: 3.3.1 Parameter distribution)
8.
Note: Marginal / Conditional Gaussians
Given a marginal Gaussian distribution for x and a conditional Gaussian distribution for y given x:
  p(x) = N(x | μ, Λ⁻¹)
  p(y | x) = N(y | Ax + b, L⁻¹)
then the marginal distribution of y and the conditional distribution of x given y are:
  p(y) = N(y | Aμ + b, L⁻¹ + AΛ⁻¹Aᵀ)         (2.115)
  p(x | y) = N(x | Σ{AᵀL(y − b) + Λμ}, Σ)    (2.116)
where
  Σ = (Λ + AᵀLA)⁻¹                           (2.117)
9.
Parameter Distribution
Remember the likelihood function given in §3.1.1:
  p(t | w) = ∏ₙ N(tₙ | wᵀφ(xₙ), β⁻¹)    (β: known precision parameter)
– This is the exponential of a quadratic function of w.
The corresponding conjugate prior is therefore a Gaussian distribution:
  p(w) = N(w | m₀, S₀)
10.
Parameter Distribution
Now given:
  prior      p(w) = N(w | m₀, S₀)
  likelihood p(t | w) = ∏ₙ N(tₙ | wᵀφ(xₙ), β⁻¹)
the posterior distribution follows by using (2.116):
  p(w | t) = N(w | m_N, S_N)
where
  m_N = S_N (S₀⁻¹ m₀ + β Φᵀ t)
  S_N⁻¹ = S₀⁻¹ + β Φᵀ Φ
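The posterior update above can be sketched directly in NumPy. This is a minimal illustration, not part of the slides: the toy data, basis, and constants are all my choices.

```python
import numpy as np

# Toy 1-D data (illustrative values, not from the slides).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=20)
t = 0.5 * x - 0.3 + rng.normal(scale=0.2, size=x.shape)

def design_matrix(x, degree=2):
    """Design matrix Phi with rows phi(x_n) = (1, x_n, x_n^2, ...)."""
    return np.vander(np.atleast_1d(x), degree + 1, increasing=True)

beta = 25.0           # known noise precision
m0 = np.zeros(3)      # prior mean
S0 = np.eye(3) / 2.0  # prior covariance

Phi = design_matrix(x)
# S_N^{-1} = S_0^{-1} + beta * Phi^T Phi
SN_inv = np.linalg.inv(S0) + beta * Phi.T @ Phi
SN = np.linalg.inv(SN_inv)
# m_N = S_N (S_0^{-1} m_0 + beta * Phi^T t)
mN = SN @ (np.linalg.inv(S0) @ m0 + beta * Phi.T @ t)
```

Observing data can only tighten the Gaussian posterior, so the eigenvalues of S_N never exceed those of S₀.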
11.
Online Learning – Parameter Distribution
If data points arrive sequentially, the design matrix has only one row.
Writing φₙ = φ(xₙ) and tₙ for the n-th input data, we obtain the formulas for online learning:
  m_N = S_N (S_{N−1}⁻¹ m_{N−1} + β φₙ tₙ)
  S_N⁻¹ = S_{N−1}⁻¹ + β φₙ φₙᵀ
In addition, the posterior after observing n − 1 points plays the role of the prior for the n-th point.
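Assuming the one-point update formulas above, a sketch that processes points sequentially and checks the result against the batch posterior (toy data and names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=10)
t = np.sin(3 * x) + rng.normal(scale=0.1, size=x.shape)

def phi(xn):
    # Toy basis: (1, x, x^2).
    return np.array([1.0, xn, xn ** 2])

beta, alpha = 10.0, 1.0
# Start from the prior N(0, alpha^{-1} I).
m = np.zeros(3)
S_inv = alpha * np.eye(3)

for xn, tn in zip(x, t):
    p = phi(xn)
    # S_N^{-1} = S_{N-1}^{-1} + beta * phi_n phi_n^T
    S_inv_new = S_inv + beta * np.outer(p, p)
    # m_N = S_N (S_{N-1}^{-1} m_{N-1} + beta * phi_n t_n)
    m = np.linalg.solve(S_inv_new, S_inv @ m + beta * p * tn)
    S_inv = S_inv_new

# Batch computation for comparison: both routes give the same posterior.
Phi = np.stack([phi(xn) for xn in x])
S_inv_batch = alpha * np.eye(3) + beta * Phi.T @ Phi
m_batch = np.linalg.solve(S_inv_batch, beta * Phi.T @ t)
```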
12.
Easy Gaussian Prior – Parameter Distribution
If the prior distribution is a zero-mean isotropic Gaussian governed by a single precision parameter α:
  p(w | α) = N(w | 0, α⁻¹ I)    (3.52)
the corresponding posterior distribution is also given:
  p(w | t) = N(w | m_N, S_N)
where
  m_N = β S_N Φᵀ t          (3.53)
  S_N⁻¹ = α I + β Φᵀ Φ     (3.54)
13.
Relationship with MSSE – Parameter Distribution
The log of the posterior distribution is:
  ln p(w | t) = −(β/2) Σₙ {tₙ − wᵀφ(xₙ)}² − (α/2) wᵀw + const    (3.55)
If the prior is given by (3.52), the following two are equivalent:
– maximization of (3.55) with respect to w;
– minimization of the sum-of-squares error (MSSE) function with the addition of a quadratic
  regularization term, with coefficient λ = α/β.
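A quick numerical check of this equivalence (hypothetical toy data; the point is that the posterior mean coincides with the regularized least-squares solution for λ = α/β):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=15)
t = 1.0 - 2.0 * x + rng.normal(scale=0.3, size=x.shape)

Phi = np.vander(x, 3, increasing=True)  # basis (1, x, x^2)
alpha, beta = 0.5, 4.0

# Bayesian posterior mean: m_N = beta * S_N * Phi^T t, S_N^{-1} = alpha I + beta Phi^T Phi.
mN = np.linalg.solve(alpha * np.eye(3) + beta * Phi.T @ Phi, beta * Phi.T @ t)

# Regularized least squares with lambda = alpha / beta.
lam = alpha / beta
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(3), Phi.T @ t)
```

Dividing the Bayesian normal equations through by β shows the two systems are literally the same, so the solutions agree to machine precision.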
14.
Example – Parameter Distribution
Straight-line fitting
– Model function: y(x, w) = w₀ + w₁x
– True function: f(x, a) = a₀ + a₁x
– Error: zero-mean Gaussian noise added to f(x, a)
– Goal: to recover the values of a₀, a₁ from such data
– Prior distribution: zero-mean isotropic Gaussian (3.52)
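A minimal sketch of this experiment; the constants below (a₀, a₁, noise level, α, β) are illustrative choices of mine, since the slide's actual values are not visible in the transcript:

```python
import numpy as np

# Illustrative true parameters and noise level (chosen here, not from the slides).
a0, a1 = -0.3, 0.5
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=30)
t = a0 + a1 * x + rng.normal(scale=0.2, size=x.shape)

alpha, beta = 2.0, 1.0 / 0.2 ** 2
Phi = np.column_stack([np.ones_like(x), x])  # phi(x) = (1, x) for y = w0 + w1 x

# Posterior over w = (w0, w1), (3.53)-(3.54).
SN = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
mN = beta * SN @ Phi.T @ t  # posterior mean: should sit near (a0, a1)
```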
15.
Generalized Gaussian Prior – Parameter Distribution
We can generalize the Gaussian prior over the exponent q:
  p(w | α) ∝ exp(−(α/2) Σⱼ |wⱼ|^q)
q = 2 corresponds to the Gaussian, and only in this case is the prior conjugate to the likelihood (3.10).
16.
(Agenda, repeated as a section transition: 3.3.2 Predictive distribution)
17.
Predictive Distribution
Consider making predictions of t directly for new values of x.
To do so, we need to evaluate the predictive distribution:
  p(t | t, α, β) = ∫ p(t | w, β) p(w | t, α, β) dw    (3.57)
This is a marginalization over w (summing out w).
18.
Predictive Distribution
The conditional distribution of the target variable is:
  p(t | w, β) = N(t | y(x, w), β⁻¹)
and the posterior weight distribution is:
  p(w | t) = N(w | m_N, S_N)
Accordingly, the result of (3.57) follows by using (2.115):
  p(t | x, t, α, β) = N(t | m_Nᵀ φ(x), σ_N²(x))    (3.58)
where
  σ_N²(x) = 1/β + φ(x)ᵀ S_N φ(x)                  (3.59)
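A sketch of (3.58)–(3.59) using a 9-Gaussian basis, in the spirit of the sine-curve example later in the deck; the data, centers, and bump width here are my choices:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, size=25)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

centers = np.linspace(0, 1, 9)

def phi(x):
    """Rows are phi(x_n): 9 Gaussian bumps (width 0.1, an illustrative choice)."""
    x = np.atleast_1d(x)[:, None]
    return np.exp(-(x - centers) ** 2 / (2 * 0.1 ** 2))

alpha, beta = 2.0, 25.0
Phi = phi(x)
SN = np.linalg.inv(alpha * np.eye(9) + beta * Phi.T @ Phi)
mN = beta * SN @ Phi.T @ t

def predict(x_new):
    """Predictive mean m_N^T phi(x) and variance 1/beta + phi(x)^T S_N phi(x)."""
    p = phi(x_new)
    mean = p @ mN
    var = 1.0 / beta + np.einsum("ij,jk,ik->i", p, SN, p)
    return mean, var

mean, var = predict(np.array([0.3, 0.95]))
```

Since S_N is positive semi-definite, the predictive variance is never below the noise floor 1/β.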
19.
Predictive Distribution
Now we discuss the variance of the predictive distribution:
  σ_N²(x) = 1/β + φ(x)ᵀ S_N φ(x)    (3.59)
– The 1st term is additive noise governed by the precision parameter β.
– The 2nd term depends on the mapping vector φ(x) of each data point; as additional data
  points are observed, the posterior distribution becomes narrower: σ²_{N+1}(x) ≤ σ_N²(x).
– The 2nd term of (3.59) goes to zero in the limit N → ∞.
20.
Predictive Distribution
(figure)
21.
Example – Predictive Distribution
Gaussian regression with a sine curve
– Basis functions: 9 Gaussian curves
(figure: mean and standard deviation of the predictive distribution)
22.
Example – Predictive Distribution
Gaussian regression with a sine curve (figure)
23.
Example – Predictive Distribution
Gaussian regression with a sine curve (figure)
24.
Problem of Localized Basis – Predictive Distribution
Polynomial regression vs. Gaussian regression: which is better?
(figures)
25.
Problem of Localized Basis – Predictive Distribution
If we use localized basis functions such as Gaussians, then in regions away from the basis
function centers the contribution from the 2nd term in (3.59) goes to zero.
Accordingly, the predictive variance reduces to the noise contribution 1/β alone, so the model
is most confident exactly where it has seen no data. This is not a good result.
(figure: large contribution near the centers, small contribution away from them)
26.
Problem of Localized Basis – Predictive Distribution
This problem (arising from choosing localized basis functions) can be avoided by adopting an
alternative Bayesian approach to regression known as a Gaussian process.
– See §6.4.
27.
Case of Unknown Precision – Predictive Distribution
If both w and β are treated as unknown, we can introduce a conjugate prior distribution and
corresponding posterior distribution in the form of a Gaussian-gamma distribution.
The predictive distribution is then a Student's t-distribution.
28.
(Agenda, repeated as a section transition: 3.3.3 Equivalent kernel)
29.
Equivalent Kernel
If we substitute the posterior mean solution (3.53) into the expression (3.3), the predictive
mean can be written:
  y(x, m_N) = m_Nᵀ φ(x) = β φ(x)ᵀ S_N Φᵀ t = Σₙ β φ(x)ᵀ S_N φ(xₙ) tₙ    (3.60)
This is a linear combination of the training targets tₙ:
  y(x, m_N) = Σₙ k(x, xₙ) tₙ    (3.61)
30.
Equivalent Kernel
The coefficient of each tₙ is given by:
  k(x, x′) = β φ(x)ᵀ S_N φ(x′)    (3.62)
This function is called the smoother matrix or equivalent kernel.
Regression functions that make predictions by taking linear combinations of the training-set
target values are known as linear smoothers.
We can also predict t for a new input vector x using the equivalent kernel directly, instead of
calculating the parameters of the basis functions.
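A sketch that computes the equivalent kernel and confirms that kernel-weighted targets reproduce the predictive mean (toy polynomial setup; all constants are my choices):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 40)
t = np.cos(2 * x) + rng.normal(scale=0.1, size=x.shape)

Phi = np.vander(x, 4, increasing=True)  # polynomial basis (1, x, x^2, x^3)
alpha, beta = 1e-3, 100.0
SN = np.linalg.inv(alpha * np.eye(4) + beta * Phi.T @ Phi)
mN = beta * SN @ Phi.T @ t

def kernel(x_new):
    """Equivalent kernel values k(x_new, x_n) over all training points, (3.62)."""
    p = np.vander(np.atleast_1d(x_new), 4, increasing=True)
    return beta * p @ SN @ Phi.T  # shape: (len(x_new), N)

# Predicting via the kernel reproduces the predictive mean m_N^T phi(x).
x_test = np.array([-0.5, 0.2, 0.8])
y_kernel = kernel(x_test) @ t
y_direct = np.vander(x_test, 4, increasing=True) @ mN
```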
31.
Example 1 – Equivalent Kernel
Equivalent kernel with Gaussian regression (figure).
The equivalent kernel depends on both the set of basis functions and the data set.
32.
Equivalent Kernel
The equivalent kernel gives the contribution of each data point to the predictive mean.
The covariance between y(x) and y(x′) can be expressed via the equivalent kernel:
  cov[y(x), y(x′)] = β⁻¹ k(x, x′)    (3.63)
(figure: large contribution from nearby points, small contribution from distant points)
33.
Properties of Equivalent Kernel – Equivalent Kernel
The equivalent kernel has a localization property even if the basis functions themselves are not
localized (figures: polynomial and sigmoidal bases).
The equivalent kernel sums to 1 over the data set, for all x:
  Σₙ k(x, xₙ) = 1    (3.64)
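A numerical check of (3.64), under assumptions of mine: with a basis that includes the constant function and a nearly flat prior (tiny α), the kernel weights sum to 1 essentially exactly, because the constant function lies in the span of the basis:

```python
import numpy as np

x = np.linspace(-1, 1, 30)
Phi = np.vander(x, 4, increasing=True)  # includes the constant basis function
alpha, beta = 1e-10, 25.0               # near-flat prior so (3.64) holds almost exactly
SN = np.linalg.inv(alpha * np.eye(4) + beta * Phi.T @ Phi)

x_test = np.array([-0.7, 0.0, 0.9])
P = np.vander(x_test, 4, increasing=True)
K = beta * P @ SN @ Phi.T  # K[i, n] = k(x_test[i], x_n)
row_sums = K.sum(axis=1)   # each row should be close to 1
```

With a non-negligible α the prior shrinks the weights slightly, so the sums are then only approximately 1.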
34.
Example 2 – Equivalent Kernel
Equivalent kernel with polynomial regression – moving parameter (figure)
35.
Example 2 – Equivalent Kernel
Equivalent kernel with polynomial regression – moving parameter (figure)
36.
Example 2 – Equivalent Kernel
Equivalent kernel with polynomial regression – moving parameter (figure)
37.
Properties of Equivalent Kernel – Equivalent Kernel
The equivalent kernel satisfies an important property shared by kernel functions in general:
– A kernel function can be expressed in the form of an inner product with respect to a vector of
  nonlinear functions ψ(x):
    k(x, z) = ψ(x)ᵀ ψ(z)
– In the case of the equivalent kernel, ψ(x) is given by:
    ψ(x) = β^{1/2} S_N^{1/2} φ(x)
38.
Thank you!
zzz...