Let there be color!: Joint End-to-end Learning of
Global and Local Image Priors for Automatic Image
Colorization with Simultaneous Classification
篠原義明 (GCI 2nd cohort)
Automatic Colorization of Black-and-White Photographs by
Learning Global and Local Features with a Deep Network
篠原義明 (GCI 2nd cohort)
Contents
•Abstract
•Model
•Experimental Results & Discussion
•Additional
Why this paper?
• Reason for selection:
• I was already interested in colorizing old black-and-white photographs.
• Authors: Satoshi Iizuka*, Edgar Simo-Serra*, Hiroshi Ishikawa (Waseda University)
• Presented at SIGGRAPH 2016
Purpose
Colorize black-and-white images.
Related Works
• Models that depend on user input and trial-and-error ([Xu et al. 2013],
[Chen et al. 2012])
• Models that require the user to select reference images for the input
([Gupta et al. 2012], [Charpiat et al. 2008])
• [Liu et al. 2008] uses web search, but the user still has to enter a query
• The most recent work [Cheng et al. 2015] uses a small training set and requires a
high-performing segmentation model, so it performs poorly on images
in which its segmentation classes do not appear
The proposed method is end-to-end
Feature of Model
・Fuses global features (the scene context) with local features (textures and
objects, given that context)
・Learns global features efficiently from both image colors and class labels
・Works at any input resolution
・Requires no user intervention (e.g., parameter tuning)
・Can be trained end-to-end
・Enables style transfer
・Evaluation was based on a user study
Contents
•Abstract
•Model
•Experimental Results & Discussion
•Additional
Model Structure
• Low-level features network
• Mid-level features network
• Global features network
• Colorization network
The model consists of these four components (a minimal sketch follows below).
A grayscale image goes in → a half-resolution chrominance image comes out.
The chrominance image is upscaled by 2x and merged with the input image
to produce the color image.
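A minimal PyTorch-style sketch of how the four sub-networks could be wired together. The module names, layer counts, channel widths, and the point where the classification head branches off are my simplifications, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ColorizationModel(nn.Module):
    """Simplified sketch of the four sub-networks (not the paper's exact layers)."""
    def __init__(self, num_classes=205):
        super().__init__()
        # Low-level features: strided convolutions downsample to 1/8 resolution.
        self.low = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Mid-level features: keep the spatial size, reduce to 256 channels.
        self.mid = nn.Sequential(
            nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(),
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
        )
        # Global features: more strided convolutions, then a single 256-d vector.
        # (The paper uses fully connected layers on a fixed 224x224 input;
        #  global average pooling here is a simplification.)
        self.global_conv = nn.Sequential(
            nn.Conv2d(512, 512, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(512, 512, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.global_fc = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        # Classification head for the 205 Places scene classes.
        self.classifier = nn.Linear(256, num_classes)
        # Fusion (1x1 conv) + colorization: upsample back to half resolution,
        # ending in a Sigmoid so the two chrominance channels lie in [0, 1].
        self.fusion = nn.Conv2d(256 + 256, 256, 1)
        self.color = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),
            nn.Conv2d(64, 2, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, gray):
        low = self.low(gray)                                   # (B, 512, H/8, W/8)
        mid = self.mid(low)                                    # (B, 256, H/8, W/8)
        g = self.global_fc(self.global_conv(low).flatten(1))   # (B, 256)
        scene_logits = self.classifier(g)                      # (B, 205)
        g_map = g[:, :, None, None].expand(-1, -1, mid.size(2), mid.size(3))
        fused = torch.relu(self.fusion(torch.cat([mid, g_map], dim=1)))
        chroma = self.color(fused)                             # (B, 2, H/2, W/2)
        return chroma, scene_logits
```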
Low-Level Features Network(FCN)
・Uses convolution layers with wider strides instead of max pooling (see the shape
check below)
・1x1 padding keeps the spatial size
・Sigmoid is used as the activation (transfer) function
(Figure: 224x224 input)
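A minimal shape check with toy tensors (my own example, PyTorch) illustrating the slide's point: a stride-2 convolution replaces the conv + max-pooling pair while producing the same output resolution.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 112, 112)   # a feature map from some earlier layer

# Conventional downsampling: 3x3 conv (padding=1 keeps size) followed by max pooling.
pooled = nn.MaxPool2d(2)(nn.Conv2d(64, 128, 3, padding=1)(x))

# The paper's choice: a single 3x3 convolution with stride 2 downsamples by itself.
strided = nn.Conv2d(64, 128, 3, stride=2, padding=1)(x)

print(pooled.shape, strided.shape)  # both: torch.Size([1, 128, 56, 56])
```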
Global Features Network
・The input to the low-level features network (when it feeds the global features
network) must be 224x224.
Mid-Level Features Network(FCN)
・Output is w/8 x h/8 x 256
(Figure: the 512-channel low-level features are reduced to 256 channels)
Fusion Layer
For each coordinate (u, v), the 256-d global feature vector is concatenated with the
256-d mid-level feature at that position and mapped back to 256 dimensions:
y^fusion_(u,v) = sigmoid( b + W [ y^global ; y^mid_(u,v) ] )
with W: 256x512 and b: 256x1 (both learned), giving
Y^fusion of size W/8 x H/8 x 256 (a per-pixel sketch follows below).
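A minimal sketch of the fusion computation with made-up tensors; the concatenation order of global and mid-level features and the tensor names are my assumptions.

```python
import torch

# Hypothetical shapes following the slide: a 256-d global vector and a
# (256, H/8, W/8) mid-level feature map; W_fuse and b_fuse are learned in the paper.
g = torch.randn(256)                      # y^global
mid = torch.randn(256, 28, 28)            # y^mid for a 224x224 input (224 / 8 = 28)
W_fuse = torch.randn(256, 512)
b_fuse = torch.randn(256)

# Broadcast the global vector to every (u, v), concatenate along channels,
# then apply the learned affine map and a sigmoid, as in the fusion formula above.
g_map = g[:, None, None].expand(-1, mid.size(1), mid.size(2))
stacked = torch.cat([g_map, mid], dim=0)                      # (512, H/8, W/8)
fused = torch.sigmoid(
    torch.einsum('ck,khw->chw', W_fuse, stacked) + b_fuse[:, None, None]
)
print(fused.shape)                                            # torch.Size([256, 28, 28])
```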
Colorization Network
• Works in the CIE L*a*b* color space (lightness plus two opponent-color channels)
• a*, b* are normalized to [0, 1]
• The output is upsampled by 2x, and the network is trained by
backpropagating the MSE
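A hedged sketch of building the a*b* training target and the MSE loss. The exact normalization constants are assumptions (the slide only says a* and b* are normalized to [0, 1]), and `make_target` is my own helper, not from the paper.

```python
import torch
import torch.nn.functional as F
from skimage import color

def make_target(rgb):
    """Convert an RGB image (H, W, 3, float in [0, 1]) to a normalized a*b* target."""
    lab = color.rgb2lab(rgb)                  # L in [0, 100], a/b roughly in [-128, 127]
    ab = (lab[..., 1:] + 128.0) / 255.0       # assumed [0, 1] scaling
    return torch.from_numpy(ab).permute(2, 0, 1).float()   # (2, H, W)

# MSE between predicted chrominance and the (half-resolution) target drives training.
pred = torch.rand(1, 2, 112, 112)             # hypothetical network output
target = torch.rand(1, 2, 112, 112)
loss = F.mse_loss(pred, target)
```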
Classification network
• Without learning global information about the image, obvious mistakes occur.
To prevent this, the network is also made to classify the scene context.
• The scene class (N = 205 classes) is predicted from a 512-d intermediate
feature of the global features network.
• Loss function: the colorization loss (squared Frobenius norm of the difference
between predicted and target chrominance) plus the classification cross-entropy,
with a relative weight of α = 1/300 during training:
L = || y^color − y^target ||_F^2 + α · L_class(y^class, l^class)
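A sketch of the joint objective described above; the exact reduction (mean vs. sum over pixels) is my assumption, and `joint_loss` is my own helper name.

```python
import torch
import torch.nn.functional as F

ALPHA = 1 / 300  # relative weight of the classification loss during training (from the slide)

def joint_loss(pred_ab, target_ab, class_logits, class_labels, alpha=ALPHA):
    """Colorization MSE plus weighted 205-way scene-classification cross-entropy."""
    color_loss = F.mse_loss(pred_ab, target_ab)                # squared error on chrominance
    class_loss = F.cross_entropy(class_logits, class_labels)   # scene classification
    return color_loss + alpha * class_loss

# Toy usage with made-up shapes:
loss = joint_loss(torch.rand(8, 2, 112, 112), torch.rand(8, 2, 112, 112),
                  torch.randn(8, 205), torch.randint(0, 205, (8,)))
```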
Learning
• Places Scene Dataset [Zhou et al. 2014], preprocessed to 224x224
• 205 scene classes
• Images are resized to 256x256, then randomly cropped and flipped horizontally
• Batch normalization across the whole network
• Optimized with ADADELTA
• Batch size 128 for 200,000 iterations (see the configuration sketch below)
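A hedged sketch of the training configuration named on the slide (ADADELTA, batch size 128, crop/flip augmentation). The torchvision transforms, the crop size, and the stand-in model are my own choices, not the authors' pipeline.

```python
import torch
import torchvision.transforms as T

# Augmentation matching the slide: resize to 256, random 224x224 crop, horizontal flip.
transform = T.Compose([
    T.Resize((256, 256)),
    T.RandomCrop(224),
    T.RandomHorizontalFlip(),
    T.Grayscale(num_output_channels=1),   # the network input is the grayscale channel
    T.ToTensor(),
])

# ADADELTA optimizer, batch size 128, 200,000 iterations as stated on the slide.
model = torch.nn.Conv2d(1, 2, 3, padding=1)   # stand-in for the full colorization model
optimizer = torch.optim.Adadelta(model.parameters())
BATCH_SIZE, NUM_ITERS = 128, 200_000
```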
Contents
•Abstract
•Model
•Experimental Results & Discussion
•Additional
Colorization Results (shown again)
Evaluation
• Compared with the state-of-the-art method and a baseline model
• Evaluation relies on human observers
• Baseline model (labeled "Conv." in the figure)
Comparison with State of the Art
(Figure captions: color of the bricks; color of the rocks and the sea; color of the distant mountains)
User Study
Subjects were shown 224x224 images and asked whether they looked natural.
Do we need Global Features?
Without Global Features
The results on the previous page are thought to be due to images like these.
Style Transfer
・Style transfer is achieved by changing the source image fed to the
global features network (see the sketch below).
・Style transfer still works when only grayscale images are used as input.
(Figure panel labels: Dawn, Dusk, Spring, Fall, Daytime)
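A sketch of the style-transfer trick using the hypothetical `ColorizationModel` from the earlier architecture sketch: the global features are simply computed from a different (style) image while the local path still sees the target image.

```python
import torch

def colorize_with_style(model, target_gray, style_gray):
    """Colorize `target_gray` using global features taken from `style_gray`."""
    low_t = model.low(target_gray)                        # local path: target image
    mid_t = model.mid(low_t)
    low_s = model.low(style_gray)                         # global path: style image
    g = model.global_fc(model.global_conv(low_s).flatten(1))
    g_map = g[:, :, None, None].expand(-1, -1, mid_t.size(2), mid_t.size(3))
    fused = torch.relu(model.fusion(torch.cat([mid_t, g_map], dim=1)))
    return model.color(fused)                             # target in the style image's mood
```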
Colorizing The Past
When old black-and-white photographs are given as input, regardless of any retouching or outlines that were added...
Classification
・Classification accuracy on grayscale images is compared with prior work
・High accuracy even compared with methods specialized for classification
Color Space Selection
Three color spaces were tried: RGB, YUV, and L*a*b*
For the images above the three look quite similar, but on harder tasks
the L*a*b* color space produced the most plausible results.
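For reference, a minimal snippet converting an image into the three candidate output spaces compared above; the use of skimage's converters is my own choice, not the paper's code.

```python
import numpy as np
from skimage import color

rgb = np.random.rand(64, 64, 3)   # stand-in for a training image, values in [0, 1]

# The three output color spaces compared on the slide.
lab = color.rgb2lab(rgb)          # L*a*b*: lightness + two opponent-color channels
yuv = color.rgb2yuv(rgb)          # YUV: luma + two chrominance channels
as_rgb = rgb                      # RGB: predict all three channels directly
```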
Limitations
・Naturally, it cannot handle types of images not represented in the training data
・Style transfer only gives good results between images at a similar semantic level
・Colorization is inherently ambiguous (the user cannot intervene except through
the image used for the global features)
Cases like these cannot be resolved even by a human
looking only at the black-and-white image
Contents
•Abstract
•Model
•Experimental Results & Discussion
•Additional
Application
Application
Application
Next
http://hi.cs.waseda.ac.jp/~esimo/ja/research/sketch/
Appendix
Computation Time
Computation close to real time is possible.

Speaker Notes

1. Better performance than using local information alone
2. Replacing max pooling with strided convolutions does not change performance
3. b and W are determined by learning
4. Setting α to 1/300 makes the two losses contribute roughly equally
5. The method differs from previous approaches in that it does not use fully connected layers, uses upscaling layers, and uses a Sigmoid transfer function (activation) in the final layer
6. Neither the baseline nor the state-of-the-art method could colorize the distant mountains
7. It is unclear why the state of the art is not included in this comparison
8. The ceiling ends up the color of the sky, and the sea the color of the ground
9. For example, images drawn by people; for example, applying the features of an aquarium to a baseball stadium; the authors suggest there may be ways to optimize the colorization process