19. Complexity and uncertainty of the operating environment and situation
Czarnecki, K., & Salay, R. (2018). Towards a Framework to Manage Perceptual Uncertainty for Safe
Automated Driving. International Conference on Computer Safety, Reliability, and Security, 439–445.
22. Vulnerabilities of DNNs: adversarial examples
Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Models. CVPR 2018.
Carlini, N., & Wagner, D. (2017). Towards Evaluating the
Robustness of Neural Networks. Proceedings - IEEE
Symposium on Security and Privacy, 39–57.
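As a concrete illustration of the kind of attack these papers study, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), applied to a toy logistic-regression classifier. This is not the physical-world or Carlini-Wagner attack from the cited works, and the weights and input are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model": weights are made up for illustration.
w = np.array([1.0, -2.0, 3.0])

def predict(x):
    return sigmoid(w @ x)          # P(label = 1 | x)

# A clean input the model classifies as label 1 (score > 0.5).
x = np.array([0.1, -0.05, 0.1])

# FGSM: perturb the input along the sign of the loss gradient.
# For cross-entropy loss with true label y, the gradient w.r.t. x of a
# logistic-regression loss is (p - y) * w.
def fgsm(x, y, eps):
    p = predict(x)
    grad = (p - y) * w             # d(loss)/dx
    return x + eps * np.sign(grad)

x_adv = fgsm(x, y=1.0, eps=0.25)   # L-infinity perturbation of at most 0.25

print(predict(x))      # above 0.5: classified as label 1
print(predict(x_adv))  # below 0.5: the small perturbation flips the label
```

The point the cited papers make is that such perturbations can be tiny (here bounded by 0.25 per feature) yet reliably change the model's decision.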
23. Adversarial examples crafted with specific objectives
Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2019). A general framework for adversarial examples with
objectives. ACM Transactions on Privacy and Security, 22(3). https://doi.org/10.1145/3317611
28. Trends in the research field
Newly launched international conferences
The AAAI Workshop on Artificial Intelligence Safety (SafeAI):
https://safeai.webs.upv.es/
International Workshop on Artificial Intelligence Safety Engineering (WAISE)
@ SAFECOMP: https://www.waise.org/
AISafety @IJCAI: https://www.aisafetyw.org/
2020 USENIX Conference on Operational Machine Learning:
https://www.usenix.org/conference/opml20
The Conference on Systems and Machine Learning (SysML):
https://mlsys.org/
Centers on Safe AI
Center for AI Safety (Stanford University, USA): http://aisafety.stanford.edu/
PRECISE Center of Safe AI (University of Pennsylvania, USA):
https://precise.seas.upenn.edu/safe-autonomy
Communities
The Software Engineering for Machine Learning Applications (SEMLA) initiative
(Polytechnique Montréal, Canada): https://semla.polymtl.ca/organizers/
31. Clarifying the scope of assurance
Rahimi, M., & Chechik, M. (2019). Toward Requirements Specification for Machine-Learned Components. In 27th International Requirements Engineering Conference (pp. 241–244).
32. Root-cause analysis: classifying causes and faults
Nargiz Humbatova, Gunel Jahangirova, Gabriele Bavota, Vincenzo Riccio, Andrea Stocco,
Paolo Tonella, Taxonomy of Real Faults in Deep Learning Systems, ICSE 2020
Md Johirul Islam, Rangeet Pan, Giang Nguyen, Hridesh Rajan, Repairing Deep
Neural Networks: Fix Patterns and Challenges, ICSE 2020
33. Root-cause analysis: semantics-aware analysis of what is easily mistaken
Cynthia C. S. Liem and Annibale Panichella, Oracle Issues in Machine Learning and
Where to Find Them, 8th International Workshop on Realizing Artificial Intelligence
Synergies in Software Engineering, 2020
34. Scenario-based impact analysis
Ribeiro, M. T., & Guestrin, C. (2016). “Why Should I Trust You?” Explaining the
Predictions of Any Classifier. In the 22nd ACM SIGKDD International Conference
on Knowledge Discovery and Data Mining - KDD ’16 (pp. 1135–1144).
Extracting the inputs that contribute to an output
Extracting the training data that contribute to an output
Pang Wei Koh, Percy Liang, Understanding Black-box Predictions via Influence Functions,
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1885-1894, 2017.
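The "which inputs contributed to this output" question above can be sketched with the simplest attribution method, gradient × input. The following is a hand-rolled illustration on a toy linear model, not the LIME or influence-function algorithms from the cited papers; the weights and input are made up:

```python
import numpy as np

# Toy linear scoring model; the weights are made up for illustration.
w = np.array([0.5, -1.5, 2.0, 0.0])

def score(x):
    return float(w @ x)

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so the "gradient x input" attribution assigns w_i * x_i to feature i.
def attribute(x):
    grad = w                       # d(score)/dx
    return grad * x

x = np.array([2.0, 1.0, 1.0, 5.0])
contrib = attribute(x)

# For a linear model the attributions sum exactly to the score, and
# feature 3 (zero weight) is correctly flagged as contributing nothing.
print(contrib)
print(score(x))
```

LIME approximates a complex model locally by fitting exactly this kind of interpretable linear surrogate around one prediction; influence functions answer the analogous question for training examples rather than input features.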
35. Converting to / extracting analyzable models
Model extraction as a weighted finite automaton (WFA)
Takamasa Okudono, Masaki Waga, Taro Sekiyama, Ichiro Hasuo:
Weighted Automata Extraction from Recurrent Neural Networks via
Regression, AAAI 2020
Satoshi Hara, Kohei Hayashi, Making Tree Ensembles Interpretable: A Bayesian
Model Selection Approach, Proceedings of the Twenty-First International Conference
on Artificial Intelligence and Statistics, PMLR 84:77-85, 2018.