20. ▪ Compare the methods mainly from six perspectives
▪ Flexibility: whether the method can be applied to networks with various architectures
▪ No Pre-train: whether a pre-trained model is required
▪ Full Exploration: whether all true-labeled data (especially clean hard samples) can be learned
▪ No Supervision: whether an auxiliary clean dataset or a known noise rate is required
▪ Heavy Noise: whether datasets with high noise rates can be handled
▪ Complex Noise: whether various types of noise can be handled
Comparison of Methods
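The six comparison axes above can be sketched as a small lookup table. This is only an illustrative sketch: the method names and True/False entries below are hypothetical placeholders, not results claimed by the slides or the cited papers.

```python
# Hypothetical sketch: encoding the six comparison axes as a small table.
# Method names and True/False entries are illustrative placeholders only.

CRITERIA = [
    "Flexibility", "No Pre-train", "Full Exploration",
    "No Supervision", "Heavy Noise", "Complex Noise",
]

# True = the (made-up) method satisfies the criterion.
methods = {
    "Robust-Loss (example)": [True, True, True, True, False, False],
    "Sample-Selection (example)": [True, True, False, True, True, False],
}

def satisfies(method: str, criterion: str) -> bool:
    """Return whether a method satisfies one of the six criteria."""
    return methods[method][CRITERIA.index(criterion)]

# Summarize how many of the six criteria each method meets.
for name, flags in methods.items():
    print(f"{name}: {sum(flags)}/6 criteria satisfied")
```

Such a table makes the trade-offs explicit: no single family of methods checks all six boxes, which is why the slides compare them side by side.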
22. [1] H. Song, M. Kim, D. Park, Y. Shin, and J.-G. Lee, “Learning from Noisy Labels with
Deep Neural Networks: A Survey,” IEEE Trans. Neural Netw. Learn. Syst. (TNNLS), 2022.
[2] Z. Zhang and M. Sabuncu, “Generalized cross entropy loss for training deep neural networks with
noisy labels,” in Proc. NeurIPS, 2018, pp. 8778–8788.
[3] A. Ghosh, H. Kumar, and P. Sastry, “Robust loss functions under label noise for deep neural
networks,” in Proc. AAAI, 2017.
[4] Yang Liu, Hongyi Guo, “Peer Loss Functions: Learning from Noisy Labels without Knowing Noise
Rates”, in Proc. ICML, 2020.
[5] P. Chen, J. Ye, G. Chen, J. Zhao, and P.-A. Heng, “Beyond class-conditional assumption: A primary
attempt to combat instance-dependent label noise,” in Proc. AAAI, 2021.
[6] H. Wei, L. Tao, R. Xie, and B. An, “Open-set label noise can improve robustness against inherent label
noise,” in Proc. NeurIPS, 2021.
[7] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama, “Co-teaching: Robust training
of deep neural networks with extremely noisy labels,” in Proc. NeurIPS, 2018, pp. 8527–8537.
[8] X. Xia, T. Liu, B. Han, M. Gong, J. Yu, G. Niu, and M. Sugiyama, “Sample
Selection with Uncertainty of Losses for Learning with Noisy Labels,” in Proc. ICLR, 2022.
References