[DL Reading Group] InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective
1.
DEEP LEARNING JP
[DL Papers]
http://deeplearning.jp/
InfoBERT: Improving Robustness of Language Models from
An Information Theoretic Perspective
Kazutoshi Shinoda, Aizawa Lab
2. Bibliographic Information
• InfoBERT: Improving Robustness of Language Models from An Information
Theoretic Perspective
• Authors: Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li,
Jingjing Liu
• Affiliations: University of Illinois at Urbana-Champaign, Microsoft Dynamics 365
AI Research, Virginia Tech
• ICLR 2021