23. References (Time Series Forecasting Transformers)
• [Vaswani+, NIPS'17] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NIPS, 2017.
• [Li+, NeurIPS'19] S. Li, X. Jin, Y. Xuan, X. Zhou, W. Chen, Y. Wang, and X. Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In NeurIPS, 2019.
• [Zhou+, AAAI'21] H. Zhou, S. Zhang, J. Peng, S. Zhang, J. Li, H. Xiong, and W. Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In AAAI, 2021.
• [Kitaev+, ICLR’20] N. Kitaev, L. Kaiser, and A. Levskaya. Reformer: The efficient transformer. In ICLR, 2020.
• [Liu+, ICLR'22] S. Liu, H. Yu, C. Liao, J. Li, W. Lin, A. X. Liu, and S. Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In ICLR, 2022.
• [Wu+, NeurIPS'21] H. Wu, J. Xu, J. Wang, and M. Long. Autoformer: Decomposition transformers with Auto-Correlation for long-term series forecasting. In NeurIPS, 2021.
• [Zhou+, ICML'22] T. Zhou, Z. Ma, Q. Wen, X. Wang, L. Sun, and R. Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In ICML, 2022.
• [Woo+, arXiv'22] G. Woo, C. Liu, D. Sahoo, A. Kumar, and S. C. H. Hoi. ETSformer: Exponential smoothing transformers for time-series forecasting. arXiv preprint arXiv:2202.01381, 2022.
24. References (Others)
• [Lai+, SIGIR’18] G. Lai, W. Chang, Y. Yang, and H. Liu. Modeling long- and short-term temporal patterns with deep neural networks. In SIGIR, 2018.
• [Salinas+, Int. J. Forecast.'20] D. Salinas, V. Flunkert, J. Gasthaus, and T. Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Int. J. Forecast., Vol. 36, No. 3, pp. 1181-1191, 2020.
• [Oreshkin+, ICLR'20] B. N. Oreshkin, D. Carpov, N. Chapados, and Y. Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In ICLR, 2020.
• [Challu+, arXiv'22] C. Challu, K. G. Olivares, B. N. Oreshkin, F. Garza, M. Mergenthaler, and A. Dubrawski. N-HiTS: Neural hierarchical interpolation for time series forecasting. arXiv preprint arXiv:2201.12886, 2022.
• [Ishida+, ICML'20] T. Ishida, I. Yamane, T. Sakai, G. Niu, and M. Sugiyama. Do We Need Zero Training Loss After Achieving Zero Training Error? In ICML, 2020.
• [Li+, NeurIPS'18] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the Loss Landscape of Neural Nets. In NeurIPS, 2018.
• [Park+, ICLR’22] N. Park and S. Kim. How do vision transformers work? In ICLR, 2022.
• [Ogasawara+, IJCNN'10] E. Ogasawara, L. C. Martinez, D. de Oliveira, G. Zimbrão, G. L. Pappa, and M. Mattoso. Adaptive Normalization: A novel data normalization approach for non-stationary time series. In IJCNN, 2010, pp. 1-8, doi: 10.1109/IJCNN.2010.5596746.
• [Passalis+, IEEE TNNLS'20] N. Passalis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis. Deep Adaptive Input Normalization for Time Series Forecasting. IEEE TNNLS, Vol. 31, No. 9, pp. 3760-3765, Sept. 2020, doi: 10.1109/TNNLS.2019.2944933.
• [Kim+, ICLR'22] T. Kim, J. Kim, Y. Tae, C. Park, J. Choi, and J. Choo. Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift. In ICLR, 2022.