Paper Introduction
A Transformer-based Framework for Multivariate
Time Series Representation Learning
北海道大学 大学院情報科学研究院
情報理工学部門 複合情報工学分野 調和系工学研究室
劉兆邦
June 20, 2022
• Authors
– George Zerveas, Srideepika Jayaraman, Dhaval Patel,
Anuradha Bhamidipaty, Carsten Eickhoff
• Venue
– Proceedings of the 27th ACM SIGKDD Conference on
Knowledge Discovery & Data Mining
• Paper link
– https://dl.acm.org/doi/abs/10.1145/3447548.3467401?casa_t
oken=HbWWl3ksNy4AAAAA:watSSa0fom_EbxcyDmj8vMTSm
hxjuj0XzZ5lpJYCtzSIEvwys4my5p8ksSsfSLsdfZAPpQokiQEo
Paper information 2
• A novel framework for multivariate time series representation learning
based on the transformer encoder architecture
• The framework includes an unsupervised pre-training scheme, which
can offer substantial performance benefits over fully supervised
learning on downstream tasks
• Performs significantly better than the best currently available methods
for regression and classification
• The first unsupervised method shown to push the limits of state-of-the-art
performance for multivariate time series regression and classification
Abstract 3
Unlike in domains such as Computer Vision or Natural Language
Processing (NLP), the dominance of deep learning for time series
is far from established
Non-deep learning methods such as TS-CHIEF, HIVE-COTE, and ROCKET
currently hold the record on time series regression and classification
dataset benchmarks
Transformer models are based on a multi-headed attention mechanism
that renders them particularly suitable for time series data
Develop a generally applicable methodology (framework) that can
leverage unlabeled data by first training a transformer encoder to extract
dense vector representations of multivariate time series through an input
“denoising” (autoregressive) objective.
Introduction 4
Methodology-Base model 5
However, the decoder module needs the (masked) “ground truth” output sequence as
an input, and is thus unsuitable for tasks such as classification or (extrinsic) regression.
[1] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Advances in Neural
Information Processing Systems, 2017, 30.
(Figure from [1]: the Transformer encoder and decoder architecture)
Methodology-Base model 6
Each time series has length w and contains m variables.
The input vectors at each time step are mapped to the model dimension either by a
linear projection or by a 1D convolution.
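A minimal PyTorch sketch of this input embedding step; class and parameter names and d_model=64 are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class TSInputEmbedding(nn.Module):
    """Map a (batch, w, m) multivariate series to (batch, w, d_model)."""
    def __init__(self, m: int, d_model: int, use_conv: bool = False):
        super().__init__()
        self.use_conv = use_conv
        if use_conv:
            # 1D convolution across time, mixing neighbouring time steps
            self.proj = nn.Conv1d(m, d_model, kernel_size=3, padding=1)
        else:
            # linear projection applied independently to each time step
            self.proj = nn.Linear(m, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.use_conv:
            return self.proj(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)

x = torch.randn(8, 128, 6)              # batch of 8 series, length w=128, m=6 variables
emb = TSInputEmbedding(m=6, d_model=64)
print(emb(x).shape)                     # torch.Size([8, 128, 64])
```

The linear projection treats each time step independently, while the convolution lets neighbouring time steps share information before entering the encoder.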
Methodology-Base model 7
Positional encodings
Based on the performance of our models, we also observe that the positional
encodings generally appear not to significantly interfere with the numerical
information of the time series.
Padding
• After setting a maximum sequence length 𝑤 for the entire dataset, shorter
samples are padded with arbitrary values
• Generate a padding mask which adds a large negative value to the attention
scores for the padded positions, before computing the self-attention distribution
with the softmax function
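A small sketch of how such an additive padding mask can be applied before the softmax; the shapes and the -1e9 constant are illustrative assumptions:

```python
import torch

def padding_attention_mask(lengths, w):
    """Additive mask: 0 at valid positions, a large negative value at padded positions,
    added to the attention scores before the softmax."""
    batch = len(lengths)
    is_pad = torch.arange(w).expand(batch, w) >= torch.tensor(lengths).unsqueeze(1)
    mask = torch.zeros(batch, w)
    mask[is_pad] = -1e9                     # softmax then assigns ~0 weight to padding
    return mask                             # (batch, w), broadcast over query positions

scores = torch.randn(2, 5, 5)                        # (batch, query, key) attention scores
mask = padding_attention_mask(lengths=[5, 3], w=5)   # second sample has 2 padded steps
weights = torch.softmax(scores + mask.unsqueeze(1), dim=-1)
print(weights[1, 0])                                 # last two keys get ~0 attention
```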
Methodology-Regression and classification 8
The output vectors z are concatenated into a single vector and passed through a linear
layer, which adapts the model to regression and classification tasks.
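A minimal sketch of this prediction head, assuming output representations z of shape (batch, w, d_model); names are illustrative:

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Concatenate the w output vectors z_t into one vector and apply a linear layer."""
    def __init__(self, d_model: int, w: int, num_outputs: int):
        super().__init__()
        self.linear = nn.Linear(d_model * w, num_outputs)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.linear(z.reshape(z.size(0), -1))  # (batch, w, d_model) -> (batch, num_outputs)

z = torch.randn(8, 128, 64)
head = PredictionHead(d_model=64, w=128, num_outputs=1)   # e.g. a scalar regression target
print(head(z).shape)                                      # torch.Size([8, 1])
```

For classification, num_outputs would be the number of classes, with the output fed to a cross-entropy loss.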
Methodology-Unsupervised pre-training 9
We set part of the input to 0 and ask the
model to predict the masked values
A binary noise mask is created independently for each training sample and
epoch, and the input is masked by elementwise multiplication:
• Mean length of the masked (0) segments in each row (variable)
• Mean length of the unmasked (1) segments in each row
• Masking ratio r
• r · m: average number of masked variables in each column (time step)
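A sketch of how such a geometric-segment noise mask could be generated per variable; r=0.15 and lm=3 are illustrative defaults, not necessarily the paper's exact settings:

```python
import numpy as np

def geometric_noise_mask(w, m, r=0.15, lm=3):
    """Per-variable binary mask: masked (0) segments have geometric mean length lm,
    unmasked (1) segments have mean length lm * (1 - r) / r, so a fraction r of each
    variable is masked on average."""
    lu = lm * (1 - r) / r
    mask = np.ones((w, m), dtype=int)
    for j in range(m):
        t = 0
        masked = np.random.rand() < r          # pick the starting state at random
        while t < w:
            mean_len = lm if masked else lu
            length = np.random.geometric(1.0 / mean_len)
            if masked:
                mask[t:t + length, j] = 0
            t += length
            masked = not masked
    return mask

mask = geometric_noise_mask(w=100, m=6)
x = np.random.randn(100, 6)
x_masked = x * mask                             # elementwise multiplication zeroes masked values
print(mask.mean())                              # roughly 1 - r of the entries stay 1
```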
Methodology-Unsupervised pre-training 10
We chose this masking pattern because it encourages the model to learn to attend
both to preceding and succeeding segments in individual variables, as well as to
existing contemporary values of the other variables in the time series, and thereby
to learn to model inter-dependencies between variables.
The loss is computed only on the masked positions.
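A minimal sketch of a reconstruction loss restricted to the masked positions, assuming a mean squared error objective:

```python
import torch

def masked_mse_loss(x_hat, x, mask):
    """Mean squared error computed only at positions where the noise mask is 0."""
    masked_positions = (mask == 0)
    return ((x_hat[masked_positions] - x[masked_positions]) ** 2).mean()

x = torch.randn(8, 100, 6)                       # original series (batch, w, m)
mask = (torch.rand(8, 100, 6) > 0.15).float()    # 1 = keep, 0 = masked
x_hat = torch.randn(8, 100, 6)                   # model reconstruction of the full series
print(masked_mse_loss(x_hat, x, mask).item())
```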
Experiments & Results-Regression 11
TST (Time Series Transformer)
• proposed approach achieves an average rank of 1.33
• pre-trained transformer models outperform the fully
supervised ones in 3 out of 6 datasets
‒ no additional samples are used for pretraining
average relative difference from mean RMSE: lower values mean better average
performance (difference from the mean RMSE across methods)
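A sketch of how this "average relative difference from mean" metric can be computed, following the reading above; the numbers are purely illustrative:

```python
import numpy as np

# rmse[i, j]: RMSE of method j on dataset i (purely illustrative numbers)
rmse = np.array([[2.0, 2.5, 3.0],
                 [0.8, 1.0, 1.2]])
mean_per_dataset = rmse.mean(axis=1, keepdims=True)
rel_diff = (rmse - mean_per_dataset) / mean_per_dataset
avg_rel_diff = rel_diff.mean(axis=0)    # one value per method; lower is better
print(avg_rel_diff)
```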
Experiments & Results-Regression 12
Q1: Given a partially labeled dataset of a certain size, how will additional
labels affect performance?
• As expected, performance improves with an increasing proportion of available labels, both
for a fully supervised model and for the same model that has been first pre-trained on the
entire training set through the unsupervised objective and then fine-tuned
• not only does the pretrained model outperform the fully supervised one, but the
benefit persists throughout the entire range of label availability, even when the
models are allowed to use all labels
Experiments & Results-Regression 13
Q2: Given a labeled dataset, how will additional unlabeled samples
affect performance?
• for a given number of labels (shown as a percentage of the totally available labels),
the more data samples are used for unsupervised learning, the lower the error
achieved
• reusing a subset of the same samples for unsupervised pretraining improves
performance
(baseline in the plot: fully supervised training only)
Experiments & Results-Classification 14
• performed best on 7 out of the 11 datasets, achieving an average rank of 1.7
• We believe that this indicates a relative weakness of our current models when
dealing with very low-dimensional time series (3-dimensional)
• Finally, we observe that the pre-trained transformer models performed better
than the fully supervised ones in 8 out of 11 datasets, sometimes by a substantial
margin
‒ suggesting that the benefit originates from merely reusing the same samples in a
different training task
Additional points 15
Execution time on Tesla P100 GPU
In practice, despite allowing for many hundreds of epochs, we never trained our models
on a GPU for longer than 3 hours on any of the examined datasets.
Conclusion 16
➢ Propose a transformer-based framework for unsupervised representation
learning of multivariate time series
➢ Unsupervised learning of multivariate time series with this framework
surpasses the performance of all current state-of-the-art supervised
methods
➢ Unsupervised pre-training of our transformer models offers a substantial
performance benefit over fully supervised learning, even without
leveraging additional unlabeled data
➢ The proposed framework can be readily used for additional downstream
tasks, such as forecasting, clustering and missing value imputation