18. Impressions
• Impressive that such a simple method achieves such large gains.
• RetinaNet, which extracts candidate regions densely at every pixel, seems a good match for Focal Loss.
– Even if a huge number of useless regions are extracted, Focal Loss can down-weight them.
• The idea also looks applicable to other tasks such as classification and segmentation.
– X. Zhou et al. Focal FCN: Towards Small Object Segmentation with Limited Training Data. arXiv, 2017.
– For multi-class problems, searching the hyperparameters remains a challenge.
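The down-weighting behavior noted above can be sketched in code. Below is a minimal NumPy implementation of the binary focal loss FL(p_t) = -α_t (1 - p_t)^γ log(p_t); γ = 2 and α = 0.25 follow the paper's defaults, while the function and variable names are illustrative choices:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted foreground probabilities, y: labels in {0, 1}.
    Easy examples (p_t close to 1) are scaled down by (1 - p_t)^gamma,
    so a flood of easy background regions barely affects the total loss.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)            # avoid log(0)
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# An easy, correctly classified background region (p_t = 0.99)
# contributes orders of magnitude less than a hard misclassified one:
easy = focal_loss(np.array([0.01]), np.array([0]))  # p_t = 0.99
hard = focal_loss(np.array([0.01]), np.array([1]))  # p_t = 0.01
```

With plain cross-entropy the easy example would still contribute -log(0.99) ≈ 0.01 per region, which adds up over tens of thousands of background anchors; the (1 - p_t)^γ factor is what makes the dense extraction tolerable.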
19. References
T. Lin et al. Focal Loss for Dense Object Detection. In ICCV, 2017.
J. Redmon and A. Farhadi. YOLO9000: Better, Faster, Stronger. In CVPR, 2017.
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In CVPR, 2016.
S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NIPS, 2015.
J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy. Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors. In CVPR, 2017.
X. Zhou et al. Focal FCN: Towards Small Object Segmentation with Limited Training Data. arXiv, 2017.