11. 7,500+ attendees from 59 countries/regions
Korea 2964
China 1264
Japan 260
Singapore 87
Taiwan 80
Vietnam 10
Thailand 4
Philippines 3
Malaysia 2
Indonesia 1
Germany 253
UK 185
France 128
Switzerland 103
Spain 54
Netherlands 54
Italy 44
Sweden 31
Belgium 25
Austria 23
Finland 21
Denmark 12
Estonia 7
Ireland 6
Norway 6
Luxembourg 3
Portugal 2
USA 1199
Canada 167
Mexico 2
Australia 77
New Zealand 2
Israel 106
India 72
Saudi Arabia 21
UAE 14
Pakistan 2
Iran 2
Kazakhstan 1
Oman 1
Russia 103
Czech Rep 19
Turkey 12
Slovenia 9
Croatia 8
Poland 8
Hungary 6
Serbia 6
Armenia 5
Greece 5
Belarus 4
Ukraine 4
BiH 1
Cyprus 1
Lithuania 1
Romania 1
Brazil 7
Algeria 1
Ivory Coast 1
12. Thanks to our 56 sponsors and 72 exhibitors!
PLATINUM
GOLD
SILVER
NON-PROFIT
13. ICCV Program by the Numbers
• Submissions: 4303
  • Up 100% from ICCV 2017!
  • Over 10K authors!
• Accepted papers: 1075
  • Acceptance rate: 25%
• Orals: 200
  • Acceptance rate: 4.6%
  • All short orals, like CVPR 2019
15. Countries/Regions of Accepted Papers
[Bar chart: number of publications per country/region (0 to 400 scale), in descending order: China, USA, Germany, Korea, UK, Canada, Australia, Switzerland, Singapore, France, Japan, Israel, India, Taiwan, Italy, UAE, Spain, Czech Republic, Saudi Arabia, Netherlands, Finland, Brazil, Sweden, Austria, Belgium, Russia, Iran, Slovenia, Turkey, Ukraine, Greece]
Clearly a two-horse race between China and the USA...
17. Best Paper Honorable Mentions
Asynchronous Single-Photon 3D Imaging
Anant Gupta, Atul Ingle, Mohit Gupta
University of Wisconsin-Madison
18. Best Paper Honorable Mentions
Specifying Object Attributes and Relations
in Interactive Scene Generation
Oron Ashual, Lior Wolf
Tel-Aviv University
19. Best Student Paper Award
PLMP - Point-Line Minimal Problems
in Complete Multi-View Visibility
Timothy Duff (Georgia Tech),
Kathlén Kohn (KTH),
Anton Leykin (Georgia Tech),
Tomas Pajdla (Czech Technical University in Prague)
20. Best Paper Award (Marr Prize)
SinGAN: Learning a Generative Model
from a Single Natural Image
Tamar Rott Shaham (Technion),
Tali Dekel (Google),
Tomer Michaeli (Technion)
21. Best Paper Nominations
Larger Norm More Transferable:
An Adaptive Feature Norm Approach
for Unsupervised Domain Adaptation
Ruijia Xu, Guanbin Li, Jihan Yang, Liang Lin
Deep Hough Voting for 3D Object Detection
in Point Clouds
Charles R. Qi, Or Litany, Kaiming He, Leonidas Guibas
Unsupervised Deep Learning
for Structured Shape Matching
Jean-Michel Roufosse, Abhishek Sharma,
Maks Ovsjanikov
Gated2Depth: Real-time Dense Lidar
from Gated Images
Tobias Gruber, Frank Julca-Aguilar, Mario Bijelic,
Felix Heide
Local Aggregation for Unsupervised Learning
of Visual Embeddings
Chengxu Zhuang, Alex Zhai, Daniel Yamins
Habitat: A Platform for Embodied AI Research
Manolis Savva, Abhishek Kadian, Oleksandr Maksymets,
Yili Zhao, Erik Wijmans, Bhavana M Jain, Julian Straub,
Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh,
Dhruv Batra
Robust Change Captioning
Dong Huk Park, Trevor Darrell, Anna Rohrbach
22. Helmholtz Prize
Building Rome in a Day
Sameer Agarwal, Noah Snavely, Ian Simon,
Steven M. Seitz, Richard Szeliski
Helmholtz Prize
Attribute and Simile Classifiers
for Face Verification
Neeraj Kumar, Alexander C. Berg,
Peter N. Belhumeur, Shree K. Nayar
Everingham Prize
Gérard Medioni
for extensive and sustained contributions to CVPR & ICCV conference
organization over several decades, and multiple other services to the
community. He also introduced the unifying passport registration system for
conferences and workshops, and was a co-founder of the Computer Vision
Foundation.
Everingham Prize
Labeled Faces in the Wild (LFW)
Erik Learned-Miller, Gary B. Huang,
Tamara Berg and team
for generating and maintaining the LFW dataset and benchmark, starting in
2007. LFW has helped drive the field towards more uncontrolled, real-world
face recognition.
Azriel Rosenfeld Lifetime Achievement Award
Shimon Ullman
Distinguished Researcher Award
William T. Freeman
Distinguished Researcher Award
Shree Nayar
There will be short talks by the TC award winners (Shimon/Bill/Shree) at the start of the CVF/PAMI TC
meeting this evening.
25. Contents
• Robust Face Recognition Models or Face Normalization
• Deep Internal Learning
• The Art of Distillation
27. Robust Face Recognition Models
or Face Normalization
Which of the two approaches below is better?
A. Build a more robust model that can recognize faces accurately even from low-quality face images.
B. Build a model that enhances the quality of low-quality face images, then apply a face recognition model to the enhanced images.
→ Stay tuned for the next speaker's (Nakagawa-san's) talk.
31. (Invited Talk) Tackling Person Identification at a
Distance: Pose, Resolution and Gait
Workshop and Challenge on Real-World Recognition from Low-Quality Images and Videos (RLQ)
38. Contents
• Robust Face Recognition Models or Face Normalization
• Deep Internal Learning
• The Art of Distillation
39. Deep Internal Learning
Advances in Image Manipulation workshop and challenges on image and video manipulation
http://www.vision.ee.ethz.ch/aim19/
41. Blind Super-Resolution Kernel Estimation using an
Internal-GAN
Adversarial learning makes the two distributions below indistinguishable, enabling
accurate super-resolution even when the downscaling kernel is unknown.
A. Dist. of patches in the original image
B. Dist. of patches in the downscaled image (w/ a learnable linear interp. kernel G)
https://arxiv.org/abs/1909.06581
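To make the internal-GAN idea concrete, here is a minimal PyTorch-style sketch, with assumed (not the paper's) network sizes, crop sizes, and training loop: a deep linear generator G learns the downscaling, while a patch discriminator D tries to tell patches of the original image from patches of the downscaled one.
```python
import torch
import torch.nn as nn

scale = 2
# G: a *linear* downscaler (no nonlinearities), so its composed weights
# correspond to an estimated downscaling kernel.
G = nn.Sequential(
    nn.Conv2d(3, 64, 7, padding=3, bias=False),
    nn.Conv2d(64, 64, 5, padding=2, bias=False),
    nn.Conv2d(64, 3, 1, bias=False),
    nn.AvgPool2d(scale),  # downscale by the SR factor
)
# D: a patch discriminator with a small receptive field.
D = nn.Sequential(
    nn.Conv2d(3, 64, 7), nn.ReLU(),
    nn.Conv2d(64, 64, 1), nn.ReLU(),
    nn.Conv2d(64, 1, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

img = torch.rand(1, 3, 128, 128)  # stand-in for the single input image

def random_crop(x, size):
    i = int(torch.randint(0, x.shape[-2] - size + 1, (1,)))
    j = int(torch.randint(0, x.shape[-1] - size + 1, (1,)))
    return x[..., i:i + size, j:j + size]

for step in range(3000):
    real = random_crop(img, 32)     # A: patch from the original image
    fake = G(random_crop(img, 64))  # B: patch from the downscaled image
    # Train D to distinguish the two patch distributions.
    d_real, d_fake = D(real), D(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Train G so the two distributions become indistinguishable.
    d_fake = D(fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```
Because G is purely linear, its composed weights form an explicit estimate of the downscaling kernel; the paper additionally regularizes this kernel and hands it to an off-the-shelf SR method.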
47. Accepted main-conference papers with "Distillation" in the title
• Distilling Knowledge From a Deep Pose Regressor Network
• Distill Knowledge From NRSfM for Weakly Supervised 3D Pose Learning
• Learning Lightweight Lane Detection CNNs by Self Attention Distillation
• Knowledge Distillation via Route Constrained Optimization
• Distillation-Based Training for Multi-Exit Architectures
• Similarity-Preserving Knowledge Distillation
• UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation
• A Comprehensive Overhaul of Feature Distillation
• Online Model Distillation for Efficient Video Inference
• Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
• On the Efficacy of Knowledge Distillation
• Correlation Congruence for Knowledge Distillation
• Dynamic Kernel Distillation for Efficient Pose Estimation in Videos
• Relation Distillation Networks for Video Object Detection
• AWSD: Adaptive Weighted Spatiotemporal Distillation for Video Representation
• WSOD2: Learning Bottom-Up and Top-Down Objectness Distillation for Weakly-Supervised Object Detection
49. A rapid read-through of the accepted main-conference papers
that appear to be knowledge distillation work
• Distilling Knowledge From a Deep Pose Regressor Network
• Distill Knowledge From NRSfM for Weakly Supervised 3D Pose Learning
• Learning Lightweight Lane Detection CNNs by Self Attention Distillation
• Knowledge Distillation via Route Constrained Optimization
• Distillation-Based Training for Multi-Exit Architectures
• Similarity-Preserving Knowledge Distillation
• UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation
• A Comprehensive Overhaul of Feature Distillation
• Online Model Distillation for Efficient Video Inference
• Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
• On the Efficacy of Knowledge Distillation
• Correlation Congruence for Knowledge Distillation
• Dynamic Kernel Distillation for Efficient Pose Estimation in Videos
• Relation Distillation Networks for Video Object Detection
• AWSD: Adaptive Weighted Spatiotemporal Distillation for Video Representation
• WSOD2: Learning Bottom-Up and Top-Down Objectness Distillation for Weakly-Supervised Object Detection
The last four seem to use "distillation"
in the looser sense of refining / aggregation,
so they are skipped here.
51. Learning Lightweight Lane Detection CNNs by Self
Attention Distillation
Proposes self attention distillation (SAD), in which the attention maps of lower layers
are trained to mimic those of upper layers. Succeeds in training a lightweight yet
highly accurate lane detection model.
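As a concrete illustration, a minimal sketch of an SAD-style loss, assuming `feats` is a list of intermediate feature maps ordered from shallow to deep and using the common activation-based attention map (channel-wise mean of squares); the fixed 64x64 resize is an assumption for comparing maps of different resolutions.
```python
import torch
import torch.nn.functional as F

def attention_map(feat, size=(64, 64)):
    # Activation-based attention: channel-wise mean of squares,
    # resized to a common resolution and spatially normalized.
    amap = feat.pow(2).mean(dim=1, keepdim=True)
    amap = F.interpolate(amap, size=size, mode='bilinear',
                         align_corners=False)
    return F.normalize(amap.flatten(1), dim=1)

def sad_loss(feats):
    # Each shallower block mimics the attention of the next deeper one;
    # the deeper map is detached so it acts as the (self-)teacher.
    loss = feats[0].new_zeros(())
    for shallow, deep in zip(feats[:-1], feats[1:]):
        loss = loss + F.mse_loss(attention_map(shallow),
                                 attention_map(deep).detach())
    return loss
```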
52. Distillation-Based Training for Multi-Exit Architectures
In a multi-exit architecture, where intermediate layers also have output heads so that
inference can exit early, training the intermediate outputs to match the final layer's
output yields a network that stays accurate even when it exits early, as sketched below.
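A minimal sketch of such a training objective, assuming `logits` is the list of exit outputs with the final exit last; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not the paper's values.
```python
import torch.nn.functional as F

def multi_exit_loss(logits, target, T=3.0, alpha=0.5):
    final = logits[-1]
    loss = F.cross_entropy(final, target)  # final exit: plain CE
    for early in logits[:-1]:
        ce = F.cross_entropy(early, target)
        # Early exits also match the softened final-exit distribution.
        kd = F.kl_div(F.log_softmax(early / T, dim=1),
                      F.softmax(final.detach() / T, dim=1),
                      reduction='batchmean') * (T * T)
        loss = loss + alpha * ce + (1.0 - alpha) * kd
    return loss
```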
53. Be Your Own Teacher: Improve the Performance of
Convolutional Neural Networks via Self Distillation
When training a neural network, making the predictions from intermediate layers match
the prediction of the final layer (self distillation) improves the accuracy of the
final layer's predictions. Ensembling with the intermediate outputs improves accuracy
even further.
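For the ensemble step, a minimal sketch assuming a simple averaged-softmax combination of all exits; the paper's exact ensembling scheme may differ.
```python
import torch

def ensemble_predict(logits):
    # Average the softmax outputs of all exits (intermediate + final).
    probs = [torch.softmax(l, dim=1) for l in logits]
    return torch.stack(probs, dim=0).mean(dim=0)
```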
55. Distilling Knowledge From a Deep Pose Regressor
Network
For pose regression (self-localization) tasks, where accurate supervision signals are
hard to obtain, proposes a new loss (Attentive Imitation Loss, AIL) and training scheme
(Attentive Hint Training, AHT) that use the teacher network's loss as a confidence
weight when training the student. Leans somewhat towards engineering.
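A rough sketch of the idea behind AIL, assuming a regression setting where a per-sample teacher error is turned into a confidence weight; the paper's exact normalization (over the dataset-wide error range) is simplified here, and the function name is illustrative.
```python
import torch

def attentive_imitation_loss(student_out, teacher_out, gt, eps=1e-8):
    # Per-sample teacher error -> confidence in [0, 1]: the worse the
    # teacher did on a sample, the less the student imitates it there.
    t_err = (teacher_out - gt).pow(2).flatten(1).mean(dim=1)
    conf = 1.0 - t_err / (t_err.max() + eps)
    imit = (student_out - teacher_out).pow(2).flatten(1).mean(dim=1)
    return (conf * imit).mean()
```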
58. A Comprehensive Overhaul of Feature Distillation
A study of where the features used for distillation should be taken from, and of how
the teacher/student features should be transformed during distillation. Also introduces
hack-like tricks for networks that use ReLU. The takeaway: it is generally best to
distill the features from before the activation is applied, and the transformation must
not discard information. Achieves student performance surpassing the teacher on
classification, object detection, and semantic segmentation.
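A minimal sketch of this recipe: distill pre-ReLU features, pass the teacher's features through a margin ReLU, and use a partial L2 that ignores positions the student's subsequent ReLU would discard anyway. The scalar `margin` stands in for the per-channel statistic used in the paper.
```python
import torch

def overhaul_loss(s_pre, t_pre, margin=-1.0):
    # Margin ReLU on the teacher: keep positive responses, clamp
    # negatives to a soft margin instead of zeroing them.
    t = torch.max(t_pre, torch.full_like(t_pre, margin))
    # Partial L2: skip positions where both are negative and the student
    # is already below the teacher (the next ReLU discards them anyway).
    mask = ((s_pre > t) | (t > 0)).float()
    return ((s_pre - t).pow(2) * mask).sum() / mask.sum().clamp(min=1.0)
```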
59. On the Efficacy of Knowledge Distillation
Shows experimentally that when the student model is not powerful enough to mimic the
teacher model, stopping the distillation from the teacher partway through training
(early stopping) is effective.
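The recipe itself is a one-line schedule; a sketch, assuming the stopping epoch is chosen on a validation set:
```python
def kd_loss_schedule(epoch, ce, kd, kd_stop_epoch=80):
    # Distill only for the first part of training, then labels alone.
    return ce + kd if epoch < kd_stop_epoch else ce
```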
62. Distill Knowledge from NRSfM for Weakly Supervised
3D Pose Learning
When learning 3D pose estimation from 2D images alone, the depth supervision is
replaced with estimates from a Non-Rigid Structure from Motion (NRSfM) model. Since the
NRSfM depth estimates are not always accurate, the paper proposes a framework that
weights this supervision by the loss the NRSfM model incurred during its own training.
63. UM-Adapt: Unsupervised Multi-Task Adaptation Using
Adversarial Cross-Task Distillation
Trains depth, segmentation, and normal prediction networks on synthetic and real images
so that each task's output stays consistent with the outputs of the other two tasks,
while keeping those cross-task approximation errors as similar as possible between the
synthetic and real domains. This achieves cross-task transfer learning and domain
adaptation at the same time.
64. Online Model Distillation for Efficient Video Inference
Proposes a way to distill, online, a lightweight network that segments each frame of a
specific video stream quickly and accurately, while invoking the expensive teacher
network as rarely as possible. A heuristic runs the teacher until the student's output
gets close to the teacher's, and afterwards doubles or halves the interval until the
next teacher invocation depending on the output accuracy (IoU), as sketched below.
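A sketch of that scheduling heuristic, with simplified thresholds; `student`, `teacher`, `train_step`, and `iou` are placeholder callables.
```python
def process_stream(frames, student, teacher, train_step, iou,
                   min_gap=4, max_gap=64, thresh=0.9):
    gap, next_check = min_gap, 0
    for t, frame in enumerate(frames):
        pred = student(frame)  # cheap per-frame inference
        if t >= next_check:
            target = teacher(frame)             # expensive teacher call
            train_step(student, frame, target)  # online distillation step
            # Double the gap while the student tracks the teacher well,
            # halve it as soon as it drifts.
            if iou(pred, target) >= thresh:
                gap = min(gap * 2, max_gap)
            else:
                gap = max(gap // 2, min_gap)
            next_check = t + gap
        yield pred
```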
65. ICCV 2019 Distillation Papers: Summary
Self-distillation
• Distilling Knowledge From a Deep Pose Regressor Network
• Learning Lightweight Lane Detection CNNs by Self Attention Distillation
• Distillation-Based Training for Multi-Exit Architectures
• Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Improvements to the distillation procedure
• Knowledge Distillation via Route Constrained Optimization
• Similarity-Preserving Knowledge Distillation
• A Comprehensive Overhaul of Feature Distillation
• On the Efficacy of Knowledge Distillation
• Correlation Congruence for Knowledge Distillation
Others
• Distill Knowledge From NRSfM for Weakly Supervised 3D Pose Learning
• UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation
• Online Model Distillation for Efficient Video Inference